Test Report: KVM_Linux_crio 17348

45bf4980d68735837852807807c59e04345b65bd:2023-10-04:31286

Failed tests (29/290)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 159.06
37 TestAddons/StoppedEnableDisable 155.31
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 12.43
155 TestIngressAddonLegacy/serial/ValidateIngressAddons 178.22
203 TestMultiNode/serial/PingHostFrom2Pods 3.24
209 TestMultiNode/serial/RestartKeepsNodes 688.86
211 TestMultiNode/serial/StopMultiNode 143.54
218 TestPreload 182.7
224 TestRunningBinaryUpgrade 8.77
226 TestKubernetesUpgrade 90.46
229 TestStoppedBinaryUpgrade/Upgrade 4.54
230 TestStoppedBinaryUpgrade/MinikubeLogs 0.09
254 TestPause/serial/SecondStartNoReconfiguration 72.96
270 TestStartStop/group/no-preload/serial/Stop 139.86
274 TestStartStop/group/embed-certs/serial/Stop 140.4
276 TestStartStop/group/old-k8s-version/serial/Stop 140.08
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
286 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
298 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.16
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.39
301 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.56
302 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.59
303 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.61
304 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.55
305 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 405.23
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 296.3
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 184.01
324 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 143.32
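
Each entry above is a Go test or subtest, so an individual failure can be re-run on its own with Go's -run filter. A minimal sketch, assuming the integration tests live under ./test/integration as in the minikube source tree and that a kvm2/crio environment matching this job is already set up; the path and timeout here are assumptions:

    # Re-run only the Ingress failure (hedged: extra test flags may be required).
    go test -v -timeout 90m ./test/integration -run 'TestAddons/parallel/Ingress'
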
TestAddons/parallel/Ingress (159.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-718830 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-718830 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Non-zero exit: kubectl --context addons-718830 replace --force -f testdata/nginx-ingress-v1.yaml: exit status 1 (630.847387ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": dial tcp 10.100.71.62:443: connect: connection refused

** /stderr **
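
The connection-refused error above points at the ClusterIP of the ingress-nginx-controller-admission service, and the retry immediately below succeeds, so the webhook was momentarily unreachable rather than misconfigured. A hedged way to inspect the webhook's backing from the same kubectl context (plain kubectl, using only names that already appear in this log):

    # Is the controller pod up, and does the admission service have endpoints?
    kubectl --context addons-718830 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
    kubectl --context addons-718830 -n ingress-nginx get endpoints ingress-nginx-controller-admission
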
addons_test.go:210: (dbg) Run:  kubectl --context addons-718830 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-718830 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d178caad-4b07-44be-bc0c-87060bf92e83] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d178caad-4b07-44be-bc0c-87060bf92e83] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.029622017s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-718830 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.134746737s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
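
In the ssh curl step above, status 28 is most likely the remote curl's own exit code; curl uses 28 for an operation timeout, so the requests to the in-VM ingress timed out rather than being refused outright. A hedged manual repro against the same profile, with an explicit client-side timeout so the failure mode shows up quickly:

    # Assumed repro command; --max-time bounds curl instead of waiting on ssh.
    out/minikube-linux-amd64 -p addons-718830 ssh "curl -sv --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
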
addons_test.go:264: (dbg) Run:  kubectl --context addons-718830 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.89
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 addons disable ingress-dns --alsologtostderr -v=1: (1.752915834s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 addons disable ingress --alsologtostderr -v=1: (7.904107424s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-718830 -n addons-718830
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 logs -n 25: (1.306121443s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |                     |
	|         | -p download-only-054908                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |                     |
	|         | -p download-only-054908                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC | 04 Oct 23 00:43 UTC |
	| delete  | -p download-only-054908                                                                     | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC | 04 Oct 23 00:43 UTC |
	| delete  | -p download-only-054908                                                                     | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC | 04 Oct 23 00:43 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-652416 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |                     |
	|         | binary-mirror-652416                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37489                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-652416                                                                     | binary-mirror-652416 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC | 04 Oct 23 00:43 UTC |
	| start   | -p addons-718830 --wait=true                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC | 04 Oct 23 00:46 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | addons-718830                                                                               |                      |         |         |                     |                     |
	| addons  | addons-718830 addons                                                                        | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-718830 addons disable                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-718830 ip                                                                            | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	| addons  | addons-718830 addons disable                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | -p addons-718830                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-718830 ssh curl -s                                                                   | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ssh     | addons-718830 ssh cat                                                                       | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | /opt/local-path-provisioner/pvc-48c55315-2a94-4604-a9bc-b609ad992d89_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-718830 addons disable                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:47 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:46 UTC | 04 Oct 23 00:46 UTC |
	|         | addons-718830                                                                               |                      |         |         |                     |                     |
	| addons  | addons-718830 addons                                                                        | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:47 UTC | 04 Oct 23 00:47 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-718830 addons                                                                        | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:47 UTC | 04 Oct 23 00:47 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-718830 ip                                                                            | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:48 UTC | 04 Oct 23 00:48 UTC |
	| addons  | addons-718830 addons disable                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:48 UTC | 04 Oct 23 00:48 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-718830 addons disable                                                                | addons-718830        | jenkins | v1.31.2 | 04 Oct 23 00:48 UTC | 04 Oct 23 00:48 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 00:43:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 00:43:41.450309  135888 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:43:41.450422  135888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:41.450431  135888 out.go:309] Setting ErrFile to fd 2...
	I1004 00:43:41.450436  135888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:41.450649  135888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 00:43:41.451250  135888 out.go:303] Setting JSON to false
	I1004 00:43:41.452040  135888 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5173,"bootTime":1696375049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:43:41.452100  135888 start.go:138] virtualization: kvm guest
	I1004 00:43:41.454770  135888 out.go:177] * [addons-718830] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 00:43:41.456057  135888 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 00:43:41.457284  135888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:43:41.456072  135888 notify.go:220] Checking for updates...
	I1004 00:43:41.459690  135888 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:43:41.461078  135888 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:43:41.462385  135888 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 00:43:41.463611  135888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 00:43:41.464893  135888 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 00:43:41.496895  135888 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 00:43:41.498187  135888 start.go:298] selected driver: kvm2
	I1004 00:43:41.498200  135888 start.go:902] validating driver "kvm2" against <nil>
	I1004 00:43:41.498211  135888 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 00:43:41.498856  135888 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:41.498942  135888 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 00:43:41.513642  135888 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 00:43:41.513695  135888 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 00:43:41.513952  135888 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 00:43:41.513988  135888 cni.go:84] Creating CNI manager for ""
	I1004 00:43:41.513997  135888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:43:41.514008  135888 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 00:43:41.514019  135888 start_flags.go:321] config:
	{Name:addons-718830 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-718830 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:43:41.514454  135888 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:41.516517  135888 out.go:177] * Starting control plane node addons-718830 in cluster addons-718830
	I1004 00:43:41.518255  135888 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 00:43:41.518306  135888 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 00:43:41.518324  135888 cache.go:57] Caching tarball of preloaded images
	I1004 00:43:41.518429  135888 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 00:43:41.518442  135888 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 00:43:41.518752  135888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/config.json ...
	I1004 00:43:41.518788  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/config.json: {Name:mke2f6edca1c78e13b745e6354670249bfcfbe54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:43:41.518980  135888 start.go:365] acquiring machines lock for addons-718830: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 00:43:41.519045  135888 start.go:369] acquired machines lock for "addons-718830" in 45.765µs
	I1004 00:43:41.519070  135888 start.go:93] Provisioning new machine with config: &{Name:addons-718830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-718830 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 00:43:41.519154  135888 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 00:43:41.521062  135888 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1004 00:43:41.521222  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:43:41.521251  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:43:41.535772  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I1004 00:43:41.536331  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:43:41.537032  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:43:41.537057  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:43:41.537540  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:43:41.537909  135888 main.go:141] libmachine: (addons-718830) Calling .GetMachineName
	I1004 00:43:41.538127  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:43:41.538392  135888 start.go:159] libmachine.API.Create for "addons-718830" (driver="kvm2")
	I1004 00:43:41.538436  135888 client.go:168] LocalClient.Create starting
	I1004 00:43:41.538494  135888 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 00:43:41.605765  135888 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 00:43:41.744658  135888 main.go:141] libmachine: Running pre-create checks...
	I1004 00:43:41.744686  135888 main.go:141] libmachine: (addons-718830) Calling .PreCreateCheck
	I1004 00:43:41.745238  135888 main.go:141] libmachine: (addons-718830) Calling .GetConfigRaw
	I1004 00:43:41.745712  135888 main.go:141] libmachine: Creating machine...
	I1004 00:43:41.745728  135888 main.go:141] libmachine: (addons-718830) Calling .Create
	I1004 00:43:41.745891  135888 main.go:141] libmachine: (addons-718830) Creating KVM machine...
	I1004 00:43:41.747037  135888 main.go:141] libmachine: (addons-718830) DBG | found existing default KVM network
	I1004 00:43:41.747886  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:41.747698  135910 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b70}
	I1004 00:43:41.753639  135888 main.go:141] libmachine: (addons-718830) DBG | trying to create private KVM network mk-addons-718830 192.168.39.0/24...
	I1004 00:43:41.820991  135888 main.go:141] libmachine: (addons-718830) DBG | private KVM network mk-addons-718830 192.168.39.0/24 created
	I1004 00:43:41.821018  135888 main.go:141] libmachine: (addons-718830) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830 ...
	I1004 00:43:41.821029  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:41.820937  135910 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:43:41.821080  135888 main.go:141] libmachine: (addons-718830) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 00:43:41.821114  135888 main.go:141] libmachine: (addons-718830) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 00:43:42.045379  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:42.045257  135910 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa...
	I1004 00:43:42.222602  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:42.222450  135910 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/addons-718830.rawdisk...
	I1004 00:43:42.222662  135888 main.go:141] libmachine: (addons-718830) DBG | Writing magic tar header
	I1004 00:43:42.222697  135888 main.go:141] libmachine: (addons-718830) DBG | Writing SSH key tar header
	I1004 00:43:42.222718  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:42.222604  135910 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830 ...
	I1004 00:43:42.222731  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830
	I1004 00:43:42.222755  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 00:43:42.222794  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830 (perms=drwx------)
	I1004 00:43:42.222814  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 00:43:42.222823  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:43:42.222835  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 00:43:42.222845  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 00:43:42.222855  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home/jenkins
	I1004 00:43:42.222863  135888 main.go:141] libmachine: (addons-718830) DBG | Checking permissions on dir: /home
	I1004 00:43:42.222873  135888 main.go:141] libmachine: (addons-718830) DBG | Skipping /home - not owner
	I1004 00:43:42.222883  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 00:43:42.222893  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 00:43:42.222900  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 00:43:42.222907  135888 main.go:141] libmachine: (addons-718830) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 00:43:42.222916  135888 main.go:141] libmachine: (addons-718830) Creating domain...
	I1004 00:43:42.223940  135888 main.go:141] libmachine: (addons-718830) define libvirt domain using xml: 
	I1004 00:43:42.223961  135888 main.go:141] libmachine: (addons-718830) <domain type='kvm'>
	I1004 00:43:42.223969  135888 main.go:141] libmachine: (addons-718830)   <name>addons-718830</name>
	I1004 00:43:42.223982  135888 main.go:141] libmachine: (addons-718830)   <memory unit='MiB'>4000</memory>
	I1004 00:43:42.223991  135888 main.go:141] libmachine: (addons-718830)   <vcpu>2</vcpu>
	I1004 00:43:42.223997  135888 main.go:141] libmachine: (addons-718830)   <features>
	I1004 00:43:42.224009  135888 main.go:141] libmachine: (addons-718830)     <acpi/>
	I1004 00:43:42.224021  135888 main.go:141] libmachine: (addons-718830)     <apic/>
	I1004 00:43:42.224035  135888 main.go:141] libmachine: (addons-718830)     <pae/>
	I1004 00:43:42.224043  135888 main.go:141] libmachine: (addons-718830)     
	I1004 00:43:42.224049  135888 main.go:141] libmachine: (addons-718830)   </features>
	I1004 00:43:42.224060  135888 main.go:141] libmachine: (addons-718830)   <cpu mode='host-passthrough'>
	I1004 00:43:42.224088  135888 main.go:141] libmachine: (addons-718830)   
	I1004 00:43:42.224105  135888 main.go:141] libmachine: (addons-718830)   </cpu>
	I1004 00:43:42.224115  135888 main.go:141] libmachine: (addons-718830)   <os>
	I1004 00:43:42.224128  135888 main.go:141] libmachine: (addons-718830)     <type>hvm</type>
	I1004 00:43:42.224137  135888 main.go:141] libmachine: (addons-718830)     <boot dev='cdrom'/>
	I1004 00:43:42.224145  135888 main.go:141] libmachine: (addons-718830)     <boot dev='hd'/>
	I1004 00:43:42.224151  135888 main.go:141] libmachine: (addons-718830)     <bootmenu enable='no'/>
	I1004 00:43:42.224163  135888 main.go:141] libmachine: (addons-718830)   </os>
	I1004 00:43:42.224172  135888 main.go:141] libmachine: (addons-718830)   <devices>
	I1004 00:43:42.224177  135888 main.go:141] libmachine: (addons-718830)     <disk type='file' device='cdrom'>
	I1004 00:43:42.224221  135888 main.go:141] libmachine: (addons-718830)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/boot2docker.iso'/>
	I1004 00:43:42.224247  135888 main.go:141] libmachine: (addons-718830)       <target dev='hdc' bus='scsi'/>
	I1004 00:43:42.224263  135888 main.go:141] libmachine: (addons-718830)       <readonly/>
	I1004 00:43:42.224273  135888 main.go:141] libmachine: (addons-718830)     </disk>
	I1004 00:43:42.224287  135888 main.go:141] libmachine: (addons-718830)     <disk type='file' device='disk'>
	I1004 00:43:42.224297  135888 main.go:141] libmachine: (addons-718830)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 00:43:42.224310  135888 main.go:141] libmachine: (addons-718830)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/addons-718830.rawdisk'/>
	I1004 00:43:42.224324  135888 main.go:141] libmachine: (addons-718830)       <target dev='hda' bus='virtio'/>
	I1004 00:43:42.224337  135888 main.go:141] libmachine: (addons-718830)     </disk>
	I1004 00:43:42.224351  135888 main.go:141] libmachine: (addons-718830)     <interface type='network'>
	I1004 00:43:42.224366  135888 main.go:141] libmachine: (addons-718830)       <source network='mk-addons-718830'/>
	I1004 00:43:42.224379  135888 main.go:141] libmachine: (addons-718830)       <model type='virtio'/>
	I1004 00:43:42.224392  135888 main.go:141] libmachine: (addons-718830)     </interface>
	I1004 00:43:42.224405  135888 main.go:141] libmachine: (addons-718830)     <interface type='network'>
	I1004 00:43:42.224418  135888 main.go:141] libmachine: (addons-718830)       <source network='default'/>
	I1004 00:43:42.224432  135888 main.go:141] libmachine: (addons-718830)       <model type='virtio'/>
	I1004 00:43:42.224444  135888 main.go:141] libmachine: (addons-718830)     </interface>
	I1004 00:43:42.224457  135888 main.go:141] libmachine: (addons-718830)     <serial type='pty'>
	I1004 00:43:42.224468  135888 main.go:141] libmachine: (addons-718830)       <target port='0'/>
	I1004 00:43:42.224478  135888 main.go:141] libmachine: (addons-718830)     </serial>
	I1004 00:43:42.224486  135888 main.go:141] libmachine: (addons-718830)     <console type='pty'>
	I1004 00:43:42.224498  135888 main.go:141] libmachine: (addons-718830)       <target type='serial' port='0'/>
	I1004 00:43:42.224508  135888 main.go:141] libmachine: (addons-718830)     </console>
	I1004 00:43:42.224515  135888 main.go:141] libmachine: (addons-718830)     <rng model='virtio'>
	I1004 00:43:42.224526  135888 main.go:141] libmachine: (addons-718830)       <backend model='random'>/dev/random</backend>
	I1004 00:43:42.224535  135888 main.go:141] libmachine: (addons-718830)     </rng>
	I1004 00:43:42.224540  135888 main.go:141] libmachine: (addons-718830)     
	I1004 00:43:42.224548  135888 main.go:141] libmachine: (addons-718830)     
	I1004 00:43:42.224553  135888 main.go:141] libmachine: (addons-718830)   </devices>
	I1004 00:43:42.224559  135888 main.go:141] libmachine: (addons-718830) </domain>
	I1004 00:43:42.224566  135888 main.go:141] libmachine: (addons-718830) 
	I1004 00:43:42.229964  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:02:bc:25 in network default
	I1004 00:43:42.230499  135888 main.go:141] libmachine: (addons-718830) Ensuring networks are active...
	I1004 00:43:42.230527  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:42.231115  135888 main.go:141] libmachine: (addons-718830) Ensuring network default is active
	I1004 00:43:42.231388  135888 main.go:141] libmachine: (addons-718830) Ensuring network mk-addons-718830 is active
	I1004 00:43:42.231843  135888 main.go:141] libmachine: (addons-718830) Getting domain xml...
	I1004 00:43:42.232490  135888 main.go:141] libmachine: (addons-718830) Creating domain...
	I1004 00:43:43.647178  135888 main.go:141] libmachine: (addons-718830) Waiting to get IP...
	I1004 00:43:43.647980  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:43.648383  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:43.648440  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:43.648384  135910 retry.go:31] will retry after 247.148332ms: waiting for machine to come up
	I1004 00:43:43.896905  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:43.897442  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:43.897472  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:43.897405  135910 retry.go:31] will retry after 331.813067ms: waiting for machine to come up
	I1004 00:43:44.230854  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:44.231248  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:44.231283  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:44.231191  135910 retry.go:31] will retry after 456.443491ms: waiting for machine to come up
	I1004 00:43:44.688834  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:44.689299  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:44.689328  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:44.689263  135910 retry.go:31] will retry after 406.667092ms: waiting for machine to come up
	I1004 00:43:45.097868  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:45.098238  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:45.098269  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:45.098184  135910 retry.go:31] will retry after 739.794847ms: waiting for machine to come up
	I1004 00:43:45.839180  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:45.839566  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:45.839601  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:45.839506  135910 retry.go:31] will retry after 709.969512ms: waiting for machine to come up
	I1004 00:43:46.551243  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:46.551729  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:46.551765  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:46.551669  135910 retry.go:31] will retry after 784.791094ms: waiting for machine to come up
	I1004 00:43:47.337610  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:47.337985  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:47.338016  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:47.337923  135910 retry.go:31] will retry after 1.035240321s: waiting for machine to come up
	I1004 00:43:48.375123  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:48.375415  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:48.375447  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:48.375371  135910 retry.go:31] will retry after 1.70496621s: waiting for machine to come up
	I1004 00:43:50.081637  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:50.081959  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:50.081985  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:50.081910  135910 retry.go:31] will retry after 2.144946111s: waiting for machine to come up
	I1004 00:43:52.228588  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:52.229043  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:52.229085  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:52.228990  135910 retry.go:31] will retry after 1.943961336s: waiting for machine to come up
	I1004 00:43:54.175158  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:54.175570  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:54.175603  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:54.175512  135910 retry.go:31] will retry after 3.564404714s: waiting for machine to come up
	I1004 00:43:57.741508  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:43:57.741886  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:43:57.741912  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:43:57.741832  135910 retry.go:31] will retry after 3.183676525s: waiting for machine to come up
	I1004 00:44:00.929099  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:00.929490  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find current IP address of domain addons-718830 in network mk-addons-718830
	I1004 00:44:00.929520  135888 main.go:141] libmachine: (addons-718830) DBG | I1004 00:44:00.929425  135910 retry.go:31] will retry after 3.578085909s: waiting for machine to come up
	I1004 00:44:04.510814  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.511253  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has current primary IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.511275  135888 main.go:141] libmachine: (addons-718830) Found IP for machine: 192.168.39.89
	I1004 00:44:04.511319  135888 main.go:141] libmachine: (addons-718830) Reserving static IP address...
	I1004 00:44:04.511713  135888 main.go:141] libmachine: (addons-718830) DBG | unable to find host DHCP lease matching {name: "addons-718830", mac: "52:54:00:fb:fd:95", ip: "192.168.39.89"} in network mk-addons-718830
	I1004 00:44:04.585368  135888 main.go:141] libmachine: (addons-718830) DBG | Getting to WaitForSSH function...
	I1004 00:44:04.585395  135888 main.go:141] libmachine: (addons-718830) Reserved static IP address: 192.168.39.89
	I1004 00:44:04.585404  135888 main.go:141] libmachine: (addons-718830) Waiting for SSH to be available...
	I1004 00:44:04.587701  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.588012  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:04.588045  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.588164  135888 main.go:141] libmachine: (addons-718830) DBG | Using SSH client type: external
	I1004 00:44:04.588190  135888 main.go:141] libmachine: (addons-718830) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa (-rw-------)
	I1004 00:44:04.588213  135888 main.go:141] libmachine: (addons-718830) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 00:44:04.588226  135888 main.go:141] libmachine: (addons-718830) DBG | About to run SSH command:
	I1004 00:44:04.588235  135888 main.go:141] libmachine: (addons-718830) DBG | exit 0
	I1004 00:44:04.677598  135888 main.go:141] libmachine: (addons-718830) DBG | SSH cmd err, output: <nil>: 
	I1004 00:44:04.677893  135888 main.go:141] libmachine: (addons-718830) KVM machine creation complete!
	I1004 00:44:04.678133  135888 main.go:141] libmachine: (addons-718830) Calling .GetConfigRaw
	I1004 00:44:04.678678  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:04.678880  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:04.679076  135888 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 00:44:04.679098  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:04.680259  135888 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 00:44:04.680276  135888 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 00:44:04.680284  135888 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 00:44:04.680300  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:04.682306  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.682603  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:04.682632  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.682762  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:04.682933  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.683091  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.683237  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:04.683469  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:04.683845  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:04.683861  135888 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 00:44:04.801313  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 00:44:04.801338  135888 main.go:141] libmachine: Detecting the provisioner...
	I1004 00:44:04.801347  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:04.804374  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.804675  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:04.804710  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.804882  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:04.805100  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.805289  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.805391  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:04.805534  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:04.805866  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:04.805879  135888 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 00:44:04.922807  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 00:44:04.922930  135888 main.go:141] libmachine: found compatible host: buildroot
	I1004 00:44:04.922942  135888 main.go:141] libmachine: Provisioning with buildroot...
	I1004 00:44:04.922951  135888 main.go:141] libmachine: (addons-718830) Calling .GetMachineName
	I1004 00:44:04.923233  135888 buildroot.go:166] provisioning hostname "addons-718830"
	I1004 00:44:04.923260  135888 main.go:141] libmachine: (addons-718830) Calling .GetMachineName
	I1004 00:44:04.923451  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:04.926093  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.926410  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:04.926445  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:04.926533  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:04.926754  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.926969  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:04.927149  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:04.927298  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:04.927670  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:04.927691  135888 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-718830 && echo "addons-718830" | sudo tee /etc/hostname
	I1004 00:44:05.054089  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-718830
	
	I1004 00:44:05.054126  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.056789  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.057104  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.057128  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.057287  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:05.057505  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.057672  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.057828  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:05.057983  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:05.058314  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:05.058340  135888 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-718830' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-718830/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-718830' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 00:44:05.181996  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 00:44:05.182028  135888 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 00:44:05.182086  135888 buildroot.go:174] setting up certificates
	I1004 00:44:05.182104  135888 provision.go:83] configureAuth start
	I1004 00:44:05.182123  135888 main.go:141] libmachine: (addons-718830) Calling .GetMachineName
	I1004 00:44:05.182461  135888 main.go:141] libmachine: (addons-718830) Calling .GetIP
	I1004 00:44:05.185004  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.185285  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.185315  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.185510  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.187717  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.188040  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.188080  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.188189  135888 provision.go:138] copyHostCerts
	I1004 00:44:05.188252  135888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 00:44:05.188387  135888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 00:44:05.188463  135888 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 00:44:05.188514  135888 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.addons-718830 san=[192.168.39.89 192.168.39.89 localhost 127.0.0.1 minikube addons-718830]
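(Editor's note: the provision step above generates a server certificate whose SANs cover both the VM's IP, 192.168.39.89, and the hostname aliases localhost, minikube and addons-718830. As a rough illustration only, and not minikube's actual provision code, which signs the server key with the CA generated earlier, a self-signed certificate carrying the same SANs can be produced with Go's standard library:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template carrying the same SANs the log reports.
	// This is a self-signed sketch for illustration; minikube signs the
	// server certificate with the CA key instead.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-718830"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.89"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "addons-718830"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Print the certificate in PEM form; pipe through
	// `openssl x509 -noout -text` to confirm the Subject Alternative Names.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
)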
	I1004 00:44:05.403146  135888 provision.go:172] copyRemoteCerts
	I1004 00:44:05.403211  135888 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 00:44:05.403235  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.405791  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.406170  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.406212  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.406387  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:05.406645  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.406825  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:05.406976  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:05.495659  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 00:44:05.518331  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1004 00:44:05.542196  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 00:44:05.564100  135888 provision.go:86] duration metric: configureAuth took 381.97775ms
	I1004 00:44:05.564128  135888 buildroot.go:189] setting minikube options for container-runtime
	I1004 00:44:05.564359  135888 config.go:182] Loaded profile config "addons-718830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 00:44:05.564460  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.567067  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.567346  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.567382  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.567497  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:05.567776  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.567933  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.568040  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:05.568224  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:05.568678  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:05.568707  135888 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 00:44:05.877198  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 00:44:05.877234  135888 main.go:141] libmachine: Checking connection to Docker...
	I1004 00:44:05.877259  135888 main.go:141] libmachine: (addons-718830) Calling .GetURL
	I1004 00:44:05.878423  135888 main.go:141] libmachine: (addons-718830) DBG | Using libvirt version 6000000
	I1004 00:44:05.880616  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.880977  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.881004  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.881180  135888 main.go:141] libmachine: Docker is up and running!
	I1004 00:44:05.881200  135888 main.go:141] libmachine: Reticulating splines...
	I1004 00:44:05.881210  135888 client.go:171] LocalClient.Create took 24.34276001s
	I1004 00:44:05.881238  135888 start.go:167] duration metric: libmachine.API.Create for "addons-718830" took 24.342848677s
	I1004 00:44:05.881262  135888 start.go:300] post-start starting for "addons-718830" (driver="kvm2")
	I1004 00:44:05.881279  135888 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 00:44:05.881308  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:05.881578  135888 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 00:44:05.881609  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.883752  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.884038  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.884077  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.884274  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:05.884454  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.884647  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:05.884793  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:05.972225  135888 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 00:44:05.976384  135888 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 00:44:05.976438  135888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 00:44:05.976521  135888 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 00:44:05.976545  135888 start.go:303] post-start completed in 95.273633ms
	I1004 00:44:05.976581  135888 main.go:141] libmachine: (addons-718830) Calling .GetConfigRaw
	I1004 00:44:05.977109  135888 main.go:141] libmachine: (addons-718830) Calling .GetIP
	I1004 00:44:05.979539  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.979879  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.979913  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.980094  135888 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/config.json ...
	I1004 00:44:05.980259  135888 start.go:128] duration metric: createHost completed in 24.461094827s
	I1004 00:44:05.980283  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:05.982317  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.982612  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:05.982639  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:05.982740  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:05.982964  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.983119  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:05.983260  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:05.983386  135888 main.go:141] libmachine: Using SSH client type: native
	I1004 00:44:05.983718  135888 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1004 00:44:05.983734  135888 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 00:44:06.102575  135888 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696380246.071841212
	
	I1004 00:44:06.102599  135888 fix.go:206] guest clock: 1696380246.071841212
	I1004 00:44:06.102609  135888 fix.go:219] Guest: 2023-10-04 00:44:06.071841212 +0000 UTC Remote: 2023-10-04 00:44:05.98027178 +0000 UTC m=+24.561330953 (delta=91.569432ms)
	I1004 00:44:06.102664  135888 fix.go:190] guest clock delta is within tolerance: 91.569432ms
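(Editor's note: the fix.go lines above compare the guest clock, read over SSH with a `date` command, against the host-side timestamp taken just before the call, and accept the ~91.6ms difference as within tolerance. A minimal sketch of that comparison using the two timestamps from the log is below; the one-second threshold is an assumption for illustration, since the log does not show the tolerance minikube actually applies:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock differs from the host
// clock by no more than tol, returning the absolute delta as well.
func withinTolerance(host, guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	// Host ("Remote") and guest timestamps copied from the log lines above.
	host := time.Date(2023, 10, 4, 0, 44, 5, 980271780, time.UTC)
	guest := time.Date(2023, 10, 4, 0, 44, 6, 71841212, time.UTC)
	// One second is an assumed tolerance, used here only for illustration.
	delta, ok := withinTolerance(host, guest, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}
)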
	I1004 00:44:06.102672  135888 start.go:83] releasing machines lock for "addons-718830", held for 24.583615717s
	I1004 00:44:06.102705  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:06.103003  135888 main.go:141] libmachine: (addons-718830) Calling .GetIP
	I1004 00:44:06.105677  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.106080  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:06.106116  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.106222  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:06.106853  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:06.107027  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:06.107121  135888 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 00:44:06.107169  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:06.107294  135888 ssh_runner.go:195] Run: cat /version.json
	I1004 00:44:06.107318  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:06.109916  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.109943  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.110117  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:06.110142  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.110283  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:06.110451  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:06.110463  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:06.110477  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:06.110614  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:06.110688  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:06.110805  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:06.110867  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:06.110950  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:06.111103  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:06.221750  135888 ssh_runner.go:195] Run: systemctl --version
	I1004 00:44:06.227459  135888 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 00:44:06.382774  135888 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 00:44:06.389126  135888 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 00:44:06.389207  135888 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 00:44:06.403380  135888 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 00:44:06.403408  135888 start.go:469] detecting cgroup driver to use...
	I1004 00:44:06.403475  135888 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 00:44:06.419943  135888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 00:44:06.435045  135888 docker.go:197] disabling cri-docker service (if available) ...
	I1004 00:44:06.435100  135888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 00:44:06.450725  135888 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 00:44:06.467927  135888 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 00:44:06.590134  135888 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 00:44:06.717501  135888 docker.go:213] disabling docker service ...
	I1004 00:44:06.717567  135888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 00:44:06.730891  135888 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 00:44:06.742721  135888 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 00:44:06.854609  135888 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 00:44:06.961616  135888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 00:44:06.974015  135888 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 00:44:06.991639  135888 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 00:44:06.991708  135888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:44:07.000807  135888 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 00:44:07.000873  135888 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:44:07.009931  135888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:44:07.019217  135888 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:44:07.028537  135888 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 00:44:07.037858  135888 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 00:44:07.046050  135888 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 00:44:07.046130  135888 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 00:44:07.059317  135888 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 00:44:07.067569  135888 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 00:44:07.172755  135888 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 00:44:07.346235  135888 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 00:44:07.346340  135888 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 00:44:07.351572  135888 start.go:537] Will wait 60s for crictl version
	I1004 00:44:07.351640  135888 ssh_runner.go:195] Run: which crictl
	I1004 00:44:07.355262  135888 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 00:44:07.393960  135888 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 00:44:07.394054  135888 ssh_runner.go:195] Run: crio --version
	I1004 00:44:07.441395  135888 ssh_runner.go:195] Run: crio --version
	I1004 00:44:07.489475  135888 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 00:44:07.490991  135888 main.go:141] libmachine: (addons-718830) Calling .GetIP
	I1004 00:44:07.493549  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:07.493956  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:07.493984  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:07.494202  135888 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 00:44:07.498197  135888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 00:44:07.509795  135888 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 00:44:07.509864  135888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 00:44:07.550367  135888 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 00:44:07.550444  135888 ssh_runner.go:195] Run: which lz4
	I1004 00:44:07.554564  135888 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 00:44:07.558533  135888 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 00:44:07.558563  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 00:44:09.225227  135888 crio.go:444] Took 1.670717 seconds to copy over tarball
	I1004 00:44:09.225308  135888 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 00:44:12.303747  135888 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.078403693s)
	I1004 00:44:12.303784  135888 crio.go:451] Took 3.078527 seconds to extract the tarball
	I1004 00:44:12.303793  135888 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 00:44:12.345442  135888 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 00:44:12.411251  135888 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 00:44:12.411281  135888 cache_images.go:84] Images are preloaded, skipping loading
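(Editor's note: at this point `sudo crictl images --output json` confirms the preloaded tarball supplied the images the cluster needs, so image loading is skipped. A small standalone check in the same spirit is sketched below; the JSON field names `images` and `repoTags` are assumed from crictl's usual output shape rather than taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the assumed shape of `crictl images --output json`:
// an "images" array whose entries carry "repoTags". The exact schema is
// not shown in the log above.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Look for the same image the earlier "couldn't find preloaded image"
	// check used as its probe.
	want := "registry.k8s.io/kube-apiserver:v1.28.2"
	found := false
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				found = true
			}
		}
	}
	fmt.Printf("preloaded %s: %v\n", want, found)
}
)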
	I1004 00:44:12.411342  135888 ssh_runner.go:195] Run: crio config
	I1004 00:44:12.469078  135888 cni.go:84] Creating CNI manager for ""
	I1004 00:44:12.469107  135888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:44:12.469133  135888 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 00:44:12.469158  135888 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-718830 NodeName:addons-718830 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 00:44:12.469298  135888 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-718830"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 00:44:12.469379  135888 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-718830 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-718830 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 00:44:12.469436  135888 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 00:44:12.479628  135888 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 00:44:12.479713  135888 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 00:44:12.488907  135888 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1004 00:44:12.506368  135888 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 00:44:12.522021  135888 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1004 00:44:12.544487  135888 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I1004 00:44:12.548883  135888 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 00:44:12.562729  135888 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830 for IP: 192.168.39.89
	I1004 00:44:12.562762  135888 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:12.562909  135888 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 00:44:12.883267  135888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt ...
	I1004 00:44:12.883304  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt: {Name:mk701ddf0aad68f4fad35295248e0289ecfe59d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:12.883495  135888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key ...
	I1004 00:44:12.883507  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key: {Name:mk453289cc9ed0be7310c6d9e18c5a4433ba6483 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:12.883589  135888 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 00:44:13.063430  135888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt ...
	I1004 00:44:13.063464  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt: {Name:mk351ddeb226a3e84f6a5e9aa5e9a69c897c9c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.063648  135888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key ...
	I1004 00:44:13.063660  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key: {Name:mk80a80eb68ba03fcc8201071fbd39359a2d645d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.063765  135888 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.key
	I1004 00:44:13.063783  135888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt with IP's: []
	I1004 00:44:13.269862  135888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt ...
	I1004 00:44:13.269894  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: {Name:mk7705de71ea1d0e36e6a6d0a9d329cdb754fba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.270043  135888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.key ...
	I1004 00:44:13.270057  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.key: {Name:mkce9ac2ec18819ffcc2fdddf5663f1ef3fcaf1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.270125  135888 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key.ed36cc3e
	I1004 00:44:13.270143  135888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt.ed36cc3e with IP's: [192.168.39.89 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 00:44:13.374218  135888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt.ed36cc3e ...
	I1004 00:44:13.374252  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt.ed36cc3e: {Name:mkebe5c6c073386e18e35be852aa66b7646580c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.374435  135888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key.ed36cc3e ...
	I1004 00:44:13.374446  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key.ed36cc3e: {Name:mk588d48dedcdfd127979fc93fa57cf6f4c009a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.374513  135888 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt.ed36cc3e -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt
	I1004 00:44:13.374586  135888 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key.ed36cc3e -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key
	I1004 00:44:13.374640  135888 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.key
	I1004 00:44:13.374656  135888 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.crt with IP's: []
	I1004 00:44:13.545733  135888 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.crt ...
	I1004 00:44:13.545762  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.crt: {Name:mk20dc7c8a4e67c90cf600714b647ef3ceb67c81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.545928  135888 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.key ...
	I1004 00:44:13.545939  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.key: {Name:mk6db3ee1b215dc4331c5f4f236a1d72244869a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:13.546096  135888 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 00:44:13.546131  135888 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 00:44:13.546154  135888 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 00:44:13.546180  135888 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 00:44:13.546715  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 00:44:13.570176  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 00:44:13.592341  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 00:44:13.614428  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 00:44:13.636573  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 00:44:13.661216  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 00:44:13.683182  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 00:44:13.704732  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 00:44:13.727461  135888 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 00:44:13.749104  135888 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 00:44:13.765127  135888 ssh_runner.go:195] Run: openssl version
	I1004 00:44:13.770588  135888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 00:44:13.781912  135888 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:44:13.786449  135888 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:44:13.786499  135888 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:44:13.792002  135888 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 00:44:13.802382  135888 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 00:44:13.806589  135888 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 00:44:13.806650  135888 kubeadm.go:404] StartCluster: {Name:addons-718830 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-718830 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:44:13.806742  135888 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 00:44:13.806783  135888 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 00:44:13.849206  135888 cri.go:89] found id: ""
	I1004 00:44:13.849275  135888 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 00:44:13.859343  135888 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 00:44:13.869131  135888 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 00:44:13.879944  135888 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 00:44:13.879992  135888 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 00:44:13.932320  135888 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 00:44:13.932367  135888 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 00:44:14.068142  135888 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 00:44:14.068264  135888 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 00:44:14.068385  135888 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 00:44:14.301327  135888 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 00:44:14.441526  135888 out.go:204]   - Generating certificates and keys ...
	I1004 00:44:14.441702  135888 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 00:44:14.441808  135888 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 00:44:14.451024  135888 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 00:44:14.581197  135888 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 00:44:14.797768  135888 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 00:44:15.110804  135888 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 00:44:15.232848  135888 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 00:44:15.233052  135888 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-718830 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I1004 00:44:15.417729  135888 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 00:44:15.418021  135888 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-718830 localhost] and IPs [192.168.39.89 127.0.0.1 ::1]
	I1004 00:44:15.580253  135888 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 00:44:15.655595  135888 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 00:44:15.992058  135888 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 00:44:15.992283  135888 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 00:44:16.121145  135888 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 00:44:16.557817  135888 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 00:44:16.859714  135888 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 00:44:17.103473  135888 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 00:44:17.104314  135888 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 00:44:17.108321  135888 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 00:44:17.110449  135888 out.go:204]   - Booting up control plane ...
	I1004 00:44:17.110608  135888 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 00:44:17.110699  135888 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 00:44:17.110793  135888 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 00:44:17.125996  135888 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 00:44:17.126390  135888 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 00:44:17.126539  135888 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 00:44:17.258338  135888 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 00:44:25.256877  135888 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003565 seconds
	I1004 00:44:25.257030  135888 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 00:44:25.277346  135888 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 00:44:25.830514  135888 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 00:44:25.830769  135888 kubeadm.go:322] [mark-control-plane] Marking the node addons-718830 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 00:44:26.346988  135888 kubeadm.go:322] [bootstrap-token] Using token: o9785x.bm4g5t8pgt98newh
	I1004 00:44:26.348591  135888 out.go:204]   - Configuring RBAC rules ...
	I1004 00:44:26.348759  135888 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 00:44:26.359663  135888 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 00:44:26.369289  135888 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 00:44:26.373924  135888 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 00:44:26.379896  135888 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 00:44:26.388354  135888 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 00:44:26.403882  135888 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 00:44:26.638360  135888 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 00:44:26.781497  135888 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 00:44:26.782782  135888 kubeadm.go:322] 
	I1004 00:44:26.782876  135888 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 00:44:26.782889  135888 kubeadm.go:322] 
	I1004 00:44:26.782981  135888 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 00:44:26.782999  135888 kubeadm.go:322] 
	I1004 00:44:26.783050  135888 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 00:44:26.783187  135888 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 00:44:26.783324  135888 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 00:44:26.783347  135888 kubeadm.go:322] 
	I1004 00:44:26.783413  135888 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 00:44:26.783423  135888 kubeadm.go:322] 
	I1004 00:44:26.783525  135888 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 00:44:26.783544  135888 kubeadm.go:322] 
	I1004 00:44:26.783629  135888 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 00:44:26.783732  135888 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 00:44:26.783826  135888 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 00:44:26.783836  135888 kubeadm.go:322] 
	I1004 00:44:26.783934  135888 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 00:44:26.784041  135888 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 00:44:26.784051  135888 kubeadm.go:322] 
	I1004 00:44:26.784157  135888 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o9785x.bm4g5t8pgt98newh \
	I1004 00:44:26.784319  135888 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 00:44:26.784343  135888 kubeadm.go:322] 	--control-plane 
	I1004 00:44:26.784355  135888 kubeadm.go:322] 
	I1004 00:44:26.784471  135888 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 00:44:26.784531  135888 kubeadm.go:322] 
	I1004 00:44:26.784641  135888 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o9785x.bm4g5t8pgt98newh \
	I1004 00:44:26.784779  135888 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 00:44:26.784960  135888 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
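The Service-Kubelet warning above is printed by kubeadm itself; inside the minikube VM the kubelet is managed by minikube, so it is harmless in this run. On a plain host, the remediation kubeadm suggests would simply be (a minimal sketch, not something this test performs):

	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet.service   # expect "enabled" afterwards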
	I1004 00:44:26.784994  135888 cni.go:84] Creating CNI manager for ""
	I1004 00:44:26.785004  135888 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:44:26.787254  135888 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 00:44:26.788934  135888 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 00:44:26.853141  135888 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
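The 457-byte file copied above is the bridge CNI conflist minikube generates for the kvm2 + crio combination noted just before. Its actual contents are not printed in the log; the snippet below is only an illustrative sketch of a typical bridge conflist (the subnet, CNI version, and field values are assumptions, not the file written in this run):

	# hypothetical contents -- for illustration only, not dumped from this run
	sudo cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}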
	I1004 00:44:26.888414  135888 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 00:44:26.888507  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:26.888535  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=addons-718830 minikube.k8s.io/updated_at=2023_10_04T00_44_26_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:26.947098  135888 ops.go:34] apiserver oom_adj: -16
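The -16 recorded by ops.go is read from the legacy /proc/<pid>/oom_adj field of the apiserver process (the cat command shown a few lines up). To inspect the same thing by hand, including the non-deprecated oom_score_adj equivalent, a quick sketch (assuming the apiserver process is still running):

	pid=$(pgrep kube-apiserver)
	cat /proc/$pid/oom_adj        # legacy field; -16 in this run
	cat /proc/$pid/oom_score_adj  # modern equivalent on current kernels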
	I1004 00:44:27.108086  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:27.199722  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:27.787155  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:28.286903  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:28.787228  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:29.287033  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:29.787335  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:30.287132  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:30.787139  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:31.287887  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:31.787524  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:32.287585  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:32.787850  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:33.287595  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:33.787046  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:34.287089  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:34.787545  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:35.287256  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:35.787831  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:36.287662  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:36.787507  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:37.287084  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:37.787452  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:38.287535  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:38.787605  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:39.287089  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:39.787235  135888 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:44:39.951916  135888 kubeadm.go:1081] duration metric: took 13.063472558s to wait for elevateKubeSystemPrivileges.
	I1004 00:44:39.951951  135888 kubeadm.go:406] StartCluster complete in 26.145308314s
	I1004 00:44:39.951976  135888 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:44:39.952111  135888 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:44:39.952477  135888 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
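With the kubeconfig at the path above rewritten, the new cluster can be queried directly from the host. A minimal sketch (assuming kubectl is installed on the host; the kubeconfig path and profile name are taken from the surrounding log):

	export KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	kubectl config current-context      # typically the profile name, addons-718830
	kubectl get nodes -o wide           # the single control-plane node
	kubectl -n kube-system get pods     # apiserver, controller-manager, scheduler, etcd, coredns, kube-proxy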
	I1004 00:44:39.952701  135888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 00:44:39.952853  135888 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
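The toEnable map above is the resolved addon set for this profile. For reference only, the same selection could be expressed on the minikube command line with repeated --addons flags (a sketch; the report does not show the exact invocation the test harness used):

	minikube start -p addons-718830 --driver=kvm2 --container-runtime=crio \
	  --addons=ingress --addons=ingress-dns --addons=metrics-server --addons=registry \
	  --addons=helm-tiller --addons=csi-hostpath-driver --addons=volumesnapshots \
	  --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=gcp-auth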
	I1004 00:44:39.952951  135888 config.go:182] Loaded profile config "addons-718830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 00:44:39.952964  135888 addons.go:69] Setting ingress=true in profile "addons-718830"
	I1004 00:44:39.952976  135888 addons.go:69] Setting ingress-dns=true in profile "addons-718830"
	I1004 00:44:39.952981  135888 addons.go:69] Setting default-storageclass=true in profile "addons-718830"
	I1004 00:44:39.952994  135888 addons.go:231] Setting addon ingress-dns=true in "addons-718830"
	I1004 00:44:39.952996  135888 addons.go:69] Setting gcp-auth=true in profile "addons-718830"
	I1004 00:44:39.952999  135888 addons.go:69] Setting helm-tiller=true in profile "addons-718830"
	I1004 00:44:39.953010  135888 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-718830"
	I1004 00:44:39.953020  135888 addons.go:69] Setting cloud-spanner=true in profile "addons-718830"
	I1004 00:44:39.953022  135888 addons.go:69] Setting inspektor-gadget=true in profile "addons-718830"
	I1004 00:44:39.953032  135888 addons.go:231] Setting addon cloud-spanner=true in "addons-718830"
	I1004 00:44:39.953036  135888 addons.go:231] Setting addon inspektor-gadget=true in "addons-718830"
	I1004 00:44:39.953038  135888 addons.go:69] Setting registry=true in profile "addons-718830"
	I1004 00:44:39.953066  135888 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-718830"
	I1004 00:44:39.953075  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953077  135888 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-718830"
	I1004 00:44:39.953082  135888 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-718830"
	I1004 00:44:39.953085  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953084  135888 addons.go:69] Setting metrics-server=true in profile "addons-718830"
	I1004 00:44:39.953107  135888 addons.go:231] Setting addon metrics-server=true in "addons-718830"
	I1004 00:44:39.953130  135888 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-718830"
	I1004 00:44:39.953163  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953167  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953055  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953483  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.952991  135888 addons.go:231] Setting addon ingress=true in "addons-718830"
	I1004 00:44:39.953498  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953505  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953517  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953516  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953528  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953533  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953541  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953574  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953012  135888 addons.go:231] Setting addon helm-tiller=true in "addons-718830"
	I1004 00:44:39.953483  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953606  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953014  135888 mustload.go:65] Loading cluster: addons-718830
	I1004 00:44:39.953483  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953639  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.952967  135888 addons.go:69] Setting volumesnapshots=true in profile "addons-718830"
	I1004 00:44:39.953720  135888 addons.go:231] Setting addon volumesnapshots=true in "addons-718830"
	I1004 00:44:39.953746  135888 addons.go:69] Setting storage-provisioner=true in profile "addons-718830"
	I1004 00:44:39.953765  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953784  135888 addons.go:231] Setting addon storage-provisioner=true in "addons-718830"
	I1004 00:44:39.953807  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.953068  135888 addons.go:231] Setting addon registry=true in "addons-718830"
	I1004 00:44:39.953867  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953896  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953914  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.954063  135888 config.go:182] Loaded profile config "addons-718830": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 00:44:39.954126  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.954154  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.954336  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.954380  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.953898  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.954428  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.953769  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.954605  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.954626  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.954711  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.954736  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.954158  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.954951  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.974663  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38537
	I1004 00:44:39.974882  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I1004 00:44:39.975327  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.975511  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.975989  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.976007  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.976165  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.976187  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.976352  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.976589  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.977016  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.977059  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.977111  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.977151  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.980330  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45849
	I1004 00:44:39.980787  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.982808  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I1004 00:44:39.983217  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.983411  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.983426  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.983810  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.983825  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.984253  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.987636  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.988118  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.988154  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
	I1004 00:44:39.988174  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.988267  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.988306  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.988488  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.988618  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45721
	I1004 00:44:39.989011  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.989034  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.989118  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.989358  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.989529  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.989553  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.989606  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:39.989882  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.990028  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:39.990368  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1004 00:44:39.990816  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:39.991344  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:39.991372  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:39.993014  135888 addons.go:231] Setting addon default-storageclass=true in "addons-718830"
	I1004 00:44:39.993058  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.993548  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.993608  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.995356  135888 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-718830"
	I1004 00:44:39.995400  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:39.995895  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.995950  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:39.996349  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:39.996566  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I1004 00:44:39.997342  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:39.997385  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.007018  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.007173  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35449
	I1004 00:44:40.008010  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.008032  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38407
	I1004 00:44:40.008042  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.008504  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.008785  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.008804  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.009624  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.009653  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.010095  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.010896  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.010942  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.011229  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.012081  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35751
	I1004 00:44:40.013986  135888 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 00:44:40.012587  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.013294  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36585
	I1004 00:44:40.014419  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.015358  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40175
	I1004 00:44:40.016047  135888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 00:44:40.016070  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 00:44:40.016098  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.017013  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.017032  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.017114  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.017193  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.017392  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.017902  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.017927  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.018225  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.018263  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.018286  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.018325  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.030545  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.030563  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35009
	I1004 00:44:40.030577  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.030582  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.030618  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.030545  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.030641  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.030780  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.030801  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.030815  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.031443  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.031910  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.031951  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.032158  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.032238  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.033516  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.033554  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.034232  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.034519  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.036125  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.036144  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.036329  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.038483  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1004 00:44:40.036829  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.037541  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I1004 00:44:40.039990  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1004 00:44:40.040012  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1004 00:44:40.040035  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.040169  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39149
	I1004 00:44:40.040703  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.040745  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.040811  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.040824  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.041294  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.041313  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.041448  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.041478  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.041746  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.041803  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.042383  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.042418  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.042589  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.043477  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33951
	I1004 00:44:40.044049  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.044213  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.044659  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.044677  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.044732  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.046645  135888 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1004 00:44:40.045093  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I1004 00:44:40.045138  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.045742  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.045915  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.046999  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35091
	I1004 00:44:40.048317  135888 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1004 00:44:40.048329  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1004 00:44:40.048350  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.048484  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.048877  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.049048  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.049152  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1004 00:44:40.049512  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.049593  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.049662  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.050062  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.050079  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.050122  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.050460  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.050529  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.050595  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.050609  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.050816  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.051404  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.051423  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.051450  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.053865  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1004 00:44:40.052137  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.052736  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.052958  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:40.053773  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.054474  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.056438  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1004 00:44:40.055515  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.057762  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.055663  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.055704  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.056123  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.057887  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.056164  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:40.058035  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:40.056176  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38467
	I1004 00:44:40.057728  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1004 00:44:40.058785  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.059170  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.059534  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32873
	I1004 00:44:40.059931  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.060564  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1004 00:44:40.060808  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.060904  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34663
	I1004 00:44:40.061080  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.061233  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.062098  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.061988  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1004 00:44:40.062011  135888 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1004 00:44:40.062798  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.063027  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.063128  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.065295  135888 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 00:44:40.065313  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 00:44:40.065327  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.064056  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1004 00:44:40.064138  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.064291  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.064676  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.066780  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.068746  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1004 00:44:40.067554  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.068352  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.068382  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.069055  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.070330  135888 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1004 00:44:40.069066  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.069075  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.069081  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.069113  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40169
	I1004 00:44:40.069255  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.069430  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1004 00:44:40.069719  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.071037  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39141
	I1004 00:44:40.071568  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1004 00:44:40.071591  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1004 00:44:40.071614  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.071667  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.072432  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.073787  135888 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1004 00:44:40.072622  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.072835  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.072890  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.073285  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.074237  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.074493  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.075281  135888 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1004 00:44:40.075299  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1004 00:44:40.075318  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.077002  135888 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1004 00:44:40.076254  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.076337  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.076707  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.077335  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.077476  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.079902  135888 out.go:177]   - Using image docker.io/busybox:stable
	I1004 00:44:40.078665  135888 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1004 00:44:40.078696  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.078720  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.078743  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.078754  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.078928  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.079356  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.080083  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.081202  135888 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 00:44:40.081215  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1004 00:44:40.081229  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.081235  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.081317  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.081764  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.082013  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.082481  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.083053  135888 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1004 00:44:40.082534  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.083102  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.083315  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.083346  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.083760  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.083928  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.084672  135888 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1004 00:44:40.084584  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.086305  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.086342  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.085650  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.085826  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.086079  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.086439  135888 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 00:44:40.086454  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1004 00:44:40.086473  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.087158  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.087203  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37925
	I1004 00:44:40.087234  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.087332  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.087412  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42529
	I1004 00:44:40.089252  135888 out.go:177]   - Using image docker.io/registry:2.8.1
	I1004 00:44:40.087794  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.087963  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.088000  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.088038  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:40.089062  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.089787  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.089929  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.089936  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:40.089954  135888 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 00:44:40.090121  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.090666  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.091128  135888 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1004 00:44:40.092498  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.093937  135888 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1004 00:44:40.092512  135888 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I1004 00:44:40.093956  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1004 00:44:40.093979  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.092547  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.092568  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 00:44:40.095550  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.095560  135888 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1004 00:44:40.095572  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.092740  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.095573  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1004 00:44:40.092522  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:40.095637  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.094273  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.095786  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.095928  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.096604  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.096629  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:40.097966  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:40.098581  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.099064  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.099100  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.099250  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.099303  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.099418  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.099551  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.099720  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.099737  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.099921  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:40.099960  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.100308  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.102708  135888 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1004 00:44:40.100340  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.100465  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.100614  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.100963  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.104133  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.104168  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.104264  135888 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 00:44:40.104273  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1004 00:44:40.104289  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:40.104585  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.104643  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.104810  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.104956  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:40.107174  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.107555  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:40.107577  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:40.107724  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:40.107893  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:40.108046  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:40.108204  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	W1004 00:44:40.109579  135888 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:35668->192.168.39.89:22: read: connection reset by peer
	I1004 00:44:40.109597  135888 retry.go:31] will retry after 189.755507ms: ssh: handshake failed: read tcp 192.168.39.1:35668->192.168.39.89:22: read: connection reset by peer
	I1004 00:44:40.232423  135888 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-718830" context rescaled to 1 replicas
	I1004 00:44:40.232459  135888 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 00:44:40.234472  135888 out.go:177] * Verifying Kubernetes components...
	I1004 00:44:40.235992  135888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 00:44:40.340272  135888 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1004 00:44:40.340300  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1004 00:44:40.340697  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1004 00:44:40.346470  135888 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 00:44:40.346489  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1004 00:44:40.410383  135888 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 00:44:40.411025  135888 node_ready.go:35] waiting up to 6m0s for node "addons-718830" to be "Ready" ...
	I1004 00:44:40.431326  135888 node_ready.go:49] node "addons-718830" has status "Ready":"True"
	I1004 00:44:40.431353  135888 node_ready.go:38] duration metric: took 20.303207ms waiting for node "addons-718830" to be "Ready" ...
	I1004 00:44:40.431365  135888 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 00:44:40.443428  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 00:44:40.464272  135888 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace to be "Ready" ...
	I1004 00:44:40.466824  135888 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1004 00:44:40.466843  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1004 00:44:40.487836  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 00:44:40.493636  135888 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1004 00:44:40.493659  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1004 00:44:40.497325  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1004 00:44:40.521692  135888 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1004 00:44:40.521718  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1004 00:44:40.527227  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1004 00:44:40.561219  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1004 00:44:40.561242  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1004 00:44:40.575344  135888 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 00:44:40.575368  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 00:44:40.603161  135888 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1004 00:44:40.603186  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1004 00:44:40.683582  135888 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1004 00:44:40.683613  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1004 00:44:40.722826  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1004 00:44:40.752157  135888 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1004 00:44:40.752182  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1004 00:44:40.752201  135888 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1004 00:44:40.752216  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1004 00:44:40.855289  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1004 00:44:40.855318  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1004 00:44:40.948708  135888 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 00:44:40.948734  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 00:44:40.974441  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1004 00:44:40.992420  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1004 00:44:41.002770  135888 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1004 00:44:41.002796  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1004 00:44:41.205137  135888 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1004 00:44:41.205162  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1004 00:44:41.236123  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1004 00:44:41.236145  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1004 00:44:41.248137  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1004 00:44:41.248169  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1004 00:44:41.248399  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 00:44:41.313394  135888 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1004 00:44:41.313418  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1004 00:44:41.315391  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1004 00:44:41.315415  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1004 00:44:41.331609  135888 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 00:44:41.331631  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1004 00:44:41.389597  135888 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1004 00:44:41.389626  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1004 00:44:41.393665  135888 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1004 00:44:41.393689  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1004 00:44:41.398911  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 00:44:41.470789  135888 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1004 00:44:41.470821  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1004 00:44:41.477879  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1004 00:44:41.477905  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1004 00:44:41.536890  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1004 00:44:41.536920  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1004 00:44:41.548068  135888 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 00:44:41.548090  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1004 00:44:41.590681  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1004 00:44:41.590704  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1004 00:44:41.602754  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1004 00:44:41.696361  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1004 00:44:41.696391  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1004 00:44:41.745799  135888 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 00:44:41.745830  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1004 00:44:41.780432  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1004 00:44:43.652722  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:45.833412  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:46.813567  135888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1004 00:44:46.813617  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:46.817350  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:46.817796  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:46.817832  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:46.818034  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:46.818264  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:46.818464  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:46.818622  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:47.054640  135888 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1004 00:44:47.291208  135888 addons.go:231] Setting addon gcp-auth=true in "addons-718830"
	I1004 00:44:47.291284  135888 host.go:66] Checking if "addons-718830" exists ...
	I1004 00:44:47.291734  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:47.291797  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:47.307500  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
	I1004 00:44:47.307984  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:47.308585  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:47.308618  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:47.308966  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:47.309457  135888 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:44:47.309510  135888 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:44:47.325273  135888 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34623
	I1004 00:44:47.325759  135888 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:44:47.326319  135888 main.go:141] libmachine: Using API Version  1
	I1004 00:44:47.326349  135888 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:44:47.326695  135888 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:44:47.326892  135888 main.go:141] libmachine: (addons-718830) Calling .GetState
	I1004 00:44:47.328777  135888 main.go:141] libmachine: (addons-718830) Calling .DriverName
	I1004 00:44:47.329002  135888 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1004 00:44:47.329023  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHHostname
	I1004 00:44:47.331906  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:47.332280  135888 main.go:141] libmachine: (addons-718830) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:fd:95", ip: ""} in network mk-addons-718830: {Iface:virbr1 ExpiryTime:2023-10-04 01:43:57 +0000 UTC Type:0 Mac:52:54:00:fb:fd:95 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:addons-718830 Clientid:01:52:54:00:fb:fd:95}
	I1004 00:44:47.332322  135888 main.go:141] libmachine: (addons-718830) DBG | domain addons-718830 has defined IP address 192.168.39.89 and MAC address 52:54:00:fb:fd:95 in network mk-addons-718830
	I1004 00:44:47.332496  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHPort
	I1004 00:44:47.332677  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHKeyPath
	I1004 00:44:47.332823  135888 main.go:141] libmachine: (addons-718830) Calling .GetSSHUsername
	I1004 00:44:47.332956  135888 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/addons-718830/id_rsa Username:docker}
	I1004 00:44:48.074123  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:48.569423  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.228684967s)
	I1004 00:44:48.569471  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569482  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569493  135888 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (8.159076262s)
	I1004 00:44:48.569521  135888 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 00:44:48.569592  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.126136346s)
	I1004 00:44:48.569624  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569636  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569646  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.081779376s)
	I1004 00:44:48.569684  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569700  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569731  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.072380956s)
	I1004 00:44:48.569772  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569778  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.569811  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.569826  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569846  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569850  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.569811  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569878  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.042621709s)
	I1004 00:44:48.569897  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.569900  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569909  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.569914  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569919  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569927  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569934  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.847083043s)
	I1004 00:44:48.569955  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.569965  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.569980  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.570056  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.595580133s)
	I1004 00:44:48.569782  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570078  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.570088  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.570157  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.577703679s)
	I1004 00:44:48.570177  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.570178  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.570188  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.570189  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.570198  135888 addons.go:467] Verifying addon ingress=true in "addons-718830"
	I1004 00:44:48.572377  135888 out.go:177] * Verifying ingress addon...
	I1004 00:44:48.570284  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.321864311s)
	I1004 00:44:48.570399  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.171454674s)
	I1004 00:44:48.570411  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570436  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570457  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570459  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.967675229s)
	I1004 00:44:48.570482  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.570505  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570524  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.570543  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.570560  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.570579  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.572584  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.572614  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.572667  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.572687  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.569994  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.573928  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.573982  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.574008  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574026  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	W1004 00:44:48.573985  135888 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 00:44:48.574070  135888 retry.go:31] will retry after 125.914585ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1004 00:44:48.574088  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574080  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574111  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574120  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.574147  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574164  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574178  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.574189  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574203  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.574180  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574168  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574241  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574250  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.574252  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574260  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.573947  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.574289  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.574297  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.575236  135888 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1004 00:44:48.576155  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576160  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.576175  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.576178  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576185  135888 addons.go:467] Verifying addon registry=true in "addons-718830"
	I1004 00:44:48.576195  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576212  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.576220  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.577766  135888 out.go:177] * Verifying registry addon...
	I1004 00:44:48.576429  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.577801  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.576449  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576469  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.577891  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.576485  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576502  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.577963  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.576519  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576539  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.578022  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.578034  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.576539  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.578044  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.578060  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.576555  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.576573  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.578096  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.578088  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.579553  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.578323  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.579612  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.579625  135888 addons.go:467] Verifying addon metrics-server=true in "addons-718830"
	I1004 00:44:48.578330  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.579776  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.579794  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.580476  135888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1004 00:44:48.603785  135888 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1004 00:44:48.603821  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:48.622903  135888 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1004 00:44:48.622933  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:48.637019  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.637046  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.637356  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:48.637379  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.637396  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	W1004 00:44:48.637488  135888 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1004 00:44:48.638795  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:48.639390  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:48.644093  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:48.644117  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:48.644440  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:48.644457  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:48.700305  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1004 00:44:49.324018  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:49.333635  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:49.438546  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.658040259s)
	I1004 00:44:49.438603  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:49.438603  135888 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.109573422s)
	I1004 00:44:49.438618  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:49.440153  135888 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1004 00:44:49.438904  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:49.438931  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:49.441616  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:49.441638  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:49.442869  135888 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1004 00:44:49.441654  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:49.444299  135888 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1004 00:44:49.444317  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1004 00:44:49.444605  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:49.444626  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:49.444638  135888 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-718830"
	I1004 00:44:49.444650  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:49.446148  135888 out.go:177] * Verifying csi-hostpath-driver addon...
	I1004 00:44:49.448090  135888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1004 00:44:49.478503  135888 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1004 00:44:49.478529  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1004 00:44:49.507297  135888 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 00:44:49.507397  135888 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1004 00:44:49.528210  135888 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1004 00:44:49.528236  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:49.539163  135888 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1004 00:44:49.619867  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:49.668971  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:49.670680  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:50.086052  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:50.156349  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:50.159104  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:50.159835  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:50.634658  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:50.740675  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:50.745261  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:51.129612  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:51.152276  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:51.152812  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:51.611552  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.911188579s)
	I1004 00:44:51.611635  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:51.611652  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:51.611972  135888 main.go:141] libmachine: (addons-718830) DBG | Closing plugin on server side
	I1004 00:44:51.612017  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:51.612033  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:51.612050  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:51.612074  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:51.612355  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:51.612370  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:51.637124  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:51.687156  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:51.698768  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:51.720110  135888 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.180906783s)
	I1004 00:44:51.720162  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:51.720176  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:51.720477  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:51.720497  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:51.720527  135888 main.go:141] libmachine: Making call to close driver server
	I1004 00:44:51.720543  135888 main.go:141] libmachine: (addons-718830) Calling .Close
	I1004 00:44:51.720786  135888 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:44:51.720809  135888 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:44:51.722478  135888 addons.go:467] Verifying addon gcp-auth=true in "addons-718830"
	I1004 00:44:51.724415  135888 out.go:177] * Verifying gcp-auth addon...
	I1004 00:44:51.726869  135888 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1004 00:44:51.734667  135888 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1004 00:44:51.734685  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:51.748764  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:52.133056  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:52.155429  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:52.155555  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:52.252774  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:52.570708  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:52.633188  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:52.645541  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:52.646103  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:52.752869  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:53.127328  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:53.143936  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:53.146475  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:53.266930  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:53.641478  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:53.647529  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:53.647624  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:53.757037  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:54.134781  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:54.143766  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:54.146368  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:54.254532  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:54.631023  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:54.647023  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:54.647114  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:54.753600  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:55.097236  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:55.126457  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:55.144289  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:55.145726  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:55.259118  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:55.626549  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:55.645127  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:55.645860  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:55.753342  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:56.125660  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:56.147990  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:56.153441  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:56.255373  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:56.626719  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:56.643450  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:56.645379  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:56.768764  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:57.126183  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:57.157589  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:57.158104  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:57.262989  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:57.567279  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:44:57.629323  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:57.647004  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:57.647079  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:57.753016  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:58.130714  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:58.146123  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:58.146573  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:58.253289  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:58.632287  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:58.644777  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:58.647030  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:58.754761  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:59.133893  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:59.148968  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:59.149402  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:59.253228  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:44:59.626948  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:44:59.645820  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:44:59.646298  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:44:59.754862  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:00.072206  135888 pod_ready.go:102] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"False"
	I1004 00:45:00.130820  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:00.147474  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:00.161007  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:00.255624  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:00.633692  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:00.645476  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:00.645553  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:00.758033  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:01.099899  135888 pod_ready.go:92] pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.099922  135888 pod_ready.go:81] duration metric: took 20.635619697s waiting for pod "coredns-5dd5756b68-jrmbv" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.099932  135888 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t2thr" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.117115  135888 pod_ready.go:97] error getting pod "coredns-5dd5756b68-t2thr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t2thr" not found
	I1004 00:45:01.117145  135888 pod_ready.go:81] duration metric: took 17.205781ms waiting for pod "coredns-5dd5756b68-t2thr" in "kube-system" namespace to be "Ready" ...
	E1004 00:45:01.117156  135888 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-t2thr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-t2thr" not found
	I1004 00:45:01.117162  135888 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.140482  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:01.152925  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:01.157118  135888 pod_ready.go:92] pod "etcd-addons-718830" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.157138  135888 pod_ready.go:81] duration metric: took 39.970087ms waiting for pod "etcd-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.157148  135888 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.200128  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:01.220207  135888 pod_ready.go:92] pod "kube-apiserver-addons-718830" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.220236  135888 pod_ready.go:81] duration metric: took 63.080079ms waiting for pod "kube-apiserver-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.220248  135888 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.248433  135888 pod_ready.go:92] pod "kube-controller-manager-addons-718830" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.248461  135888 pod_ready.go:81] duration metric: took 28.203568ms waiting for pod "kube-controller-manager-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.248475  135888 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7rmz2" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.262296  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:01.274712  135888 pod_ready.go:92] pod "kube-proxy-7rmz2" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.274748  135888 pod_ready.go:81] duration metric: took 26.263635ms waiting for pod "kube-proxy-7rmz2" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.274763  135888 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.629040  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:01.647973  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:01.648884  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:01.662742  135888 pod_ready.go:92] pod "kube-scheduler-addons-718830" in "kube-system" namespace has status "Ready":"True"
	I1004 00:45:01.662764  135888 pod_ready.go:81] duration metric: took 387.993534ms waiting for pod "kube-scheduler-addons-718830" in "kube-system" namespace to be "Ready" ...
	I1004 00:45:01.662773  135888 pod_ready.go:38] duration metric: took 21.231395674s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
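The 21s of polling summarized in the line above is the wait for the labelled system-critical pods to report Ready. A rough manual equivalent, assuming the same kubeconfig context used elsewhere in this report (not the exact calls the test harness makes), would be:

  $ kubectl --context addons-718830 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  $ kubectl --context addons-718830 -n kube-system get pods -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'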
	I1004 00:45:01.662790  135888 api_server.go:52] waiting for apiserver process to appear ...
	I1004 00:45:01.662841  135888 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 00:45:01.724165  135888 api_server.go:72] duration metric: took 21.49167522s to wait for apiserver process to appear ...
	I1004 00:45:01.724192  135888 api_server.go:88] waiting for apiserver healthz status ...
	I1004 00:45:01.724212  135888 api_server.go:253] Checking apiserver healthz at https://192.168.39.89:8443/healthz ...
	I1004 00:45:01.737667  135888 api_server.go:279] https://192.168.39.89:8443/healthz returned 200:
	ok
	I1004 00:45:01.738902  135888 api_server.go:141] control plane version: v1.28.2
	I1004 00:45:01.738930  135888 api_server.go:131] duration metric: took 14.731544ms to wait for apiserver health ...
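The two checks above are a process check run over SSH inside the VM followed by a probe of the control-plane endpoint. A sketch of the same checks done by hand (the pgrep pattern, IP, and port are taken from the log lines above; curl is assumed to be available):

  $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # apiserver process present inside the VM
  $ curl -k https://192.168.39.89:8443/healthz       # the log shows this returned 200 "ok"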
	I1004 00:45:01.738939  135888 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 00:45:01.752574  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:01.884199  135888 system_pods.go:59] 17 kube-system pods found
	I1004 00:45:01.884235  135888 system_pods.go:61] "coredns-5dd5756b68-jrmbv" [10f67984-920b-47b0-bb66-ac61c52fe9ae] Running
	I1004 00:45:01.884245  135888 system_pods.go:61] "csi-hostpath-attacher-0" [c4ac79d1-45a0-429e-8ca2-5a5cbafc51af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 00:45:01.884254  135888 system_pods.go:61] "csi-hostpath-resizer-0" [3f0fd37e-5742-4569-ad81-8911cf6278b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 00:45:01.884262  135888 system_pods.go:61] "csi-hostpathplugin-2hz4b" [fde77a86-6f3e-4eda-9ecc-09b40109d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 00:45:01.884267  135888 system_pods.go:61] "etcd-addons-718830" [013f1837-4d54-443e-891d-6ab769330dc6] Running
	I1004 00:45:01.884272  135888 system_pods.go:61] "kube-apiserver-addons-718830" [6ac3a642-3bfa-4dcb-9462-7f7caad93490] Running
	I1004 00:45:01.884276  135888 system_pods.go:61] "kube-controller-manager-addons-718830" [c01a0285-43f4-4363-ae3e-530c6f19f4a7] Running
	I1004 00:45:01.884282  135888 system_pods.go:61] "kube-ingress-dns-minikube" [1fffa363-5ec1-4287-b7ad-422e2515c6f9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 00:45:01.884286  135888 system_pods.go:61] "kube-proxy-7rmz2" [0a44a0ce-9fb3-418b-8978-a56f84360964] Running
	I1004 00:45:01.884290  135888 system_pods.go:61] "kube-scheduler-addons-718830" [155df9ed-43f5-4e33-83f7-1999d92a5c8f] Running
	I1004 00:45:01.884296  135888 system_pods.go:61] "metrics-server-7c66d45ddc-t2klq" [1c2b0f0d-72fe-46dd-9216-23673a621653] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 00:45:01.884303  135888 system_pods.go:61] "registry-7csqs" [198945ee-f053-4037-b249-cd1a85d4d6d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 00:45:01.884310  135888 system_pods.go:61] "registry-proxy-rtq86" [0437de54-484e-49ca-a275-2aac0f07bf3c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 00:45:01.884317  135888 system_pods.go:61] "snapshot-controller-58dbcc7b99-qcw8d" [7b92246f-38d7-4081-9132-aeef9ce05146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 00:45:01.884330  135888 system_pods.go:61] "snapshot-controller-58dbcc7b99-z2mzl" [33301f6d-5ae3-4af2-93b5-a608ac39313f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 00:45:01.884339  135888 system_pods.go:61] "storage-provisioner" [5650e99f-daeb-4b47-98e6-dec2fdcbf6ea] Running
	I1004 00:45:01.884349  135888 system_pods.go:61] "tiller-deploy-7b677967b9-xh5qq" [dcd8e248-e9c0-40fc-8ceb-baaaa18c5b9e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1004 00:45:01.884367  135888 system_pods.go:74] duration metric: took 145.419953ms to wait for pod list to return data ...
	I1004 00:45:01.884377  135888 default_sa.go:34] waiting for default service account to be created ...
	I1004 00:45:02.065253  135888 default_sa.go:45] found service account: "default"
	I1004 00:45:02.065286  135888 default_sa.go:55] duration metric: took 180.901734ms for default service account to be created ...
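The default service account check above can be reproduced directly with kubectl; this is a hedged equivalent, not the API call the harness issues:

  $ kubectl --context addons-718830 -n default get serviceaccount default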
	I1004 00:45:02.065299  135888 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 00:45:02.126737  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:02.157665  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:02.171302  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:02.258981  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:02.275209  135888 system_pods.go:86] 17 kube-system pods found
	I1004 00:45:02.275243  135888 system_pods.go:89] "coredns-5dd5756b68-jrmbv" [10f67984-920b-47b0-bb66-ac61c52fe9ae] Running
	I1004 00:45:02.275257  135888 system_pods.go:89] "csi-hostpath-attacher-0" [c4ac79d1-45a0-429e-8ca2-5a5cbafc51af] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1004 00:45:02.275268  135888 system_pods.go:89] "csi-hostpath-resizer-0" [3f0fd37e-5742-4569-ad81-8911cf6278b5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1004 00:45:02.275279  135888 system_pods.go:89] "csi-hostpathplugin-2hz4b" [fde77a86-6f3e-4eda-9ecc-09b40109d27f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1004 00:45:02.275286  135888 system_pods.go:89] "etcd-addons-718830" [013f1837-4d54-443e-891d-6ab769330dc6] Running
	I1004 00:45:02.275294  135888 system_pods.go:89] "kube-apiserver-addons-718830" [6ac3a642-3bfa-4dcb-9462-7f7caad93490] Running
	I1004 00:45:02.275301  135888 system_pods.go:89] "kube-controller-manager-addons-718830" [c01a0285-43f4-4363-ae3e-530c6f19f4a7] Running
	I1004 00:45:02.275311  135888 system_pods.go:89] "kube-ingress-dns-minikube" [1fffa363-5ec1-4287-b7ad-422e2515c6f9] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1004 00:45:02.275320  135888 system_pods.go:89] "kube-proxy-7rmz2" [0a44a0ce-9fb3-418b-8978-a56f84360964] Running
	I1004 00:45:02.275326  135888 system_pods.go:89] "kube-scheduler-addons-718830" [155df9ed-43f5-4e33-83f7-1999d92a5c8f] Running
	I1004 00:45:02.275335  135888 system_pods.go:89] "metrics-server-7c66d45ddc-t2klq" [1c2b0f0d-72fe-46dd-9216-23673a621653] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 00:45:02.275358  135888 system_pods.go:89] "registry-7csqs" [198945ee-f053-4037-b249-cd1a85d4d6d8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1004 00:45:02.275368  135888 system_pods.go:89] "registry-proxy-rtq86" [0437de54-484e-49ca-a275-2aac0f07bf3c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1004 00:45:02.275378  135888 system_pods.go:89] "snapshot-controller-58dbcc7b99-qcw8d" [7b92246f-38d7-4081-9132-aeef9ce05146] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 00:45:02.275392  135888 system_pods.go:89] "snapshot-controller-58dbcc7b99-z2mzl" [33301f6d-5ae3-4af2-93b5-a608ac39313f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1004 00:45:02.275401  135888 system_pods.go:89] "storage-provisioner" [5650e99f-daeb-4b47-98e6-dec2fdcbf6ea] Running
	I1004 00:45:02.275408  135888 system_pods.go:89] "tiller-deploy-7b677967b9-xh5qq" [dcd8e248-e9c0-40fc-8ceb-baaaa18c5b9e] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1004 00:45:02.275421  135888 system_pods.go:126] duration metric: took 210.11486ms to wait for k8s-apps to be running ...
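The inventory above (17 kube-system pods, several still Pending on addon containers such as the CSI hostpath driver, registry, and snapshot controllers) can be inspected the same way from the command line; a sketch:

  $ kubectl --context addons-718830 -n kube-system get pods -o wide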
	I1004 00:45:02.275435  135888 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 00:45:02.275491  135888 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 00:45:02.312943  135888 system_svc.go:56] duration metric: took 37.494513ms WaitForService to wait for kubelet.
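The kubelet check above is run over SSH with the literal arguments shown in the log ("--quiet service kubelet"). A plain host-side query of the same service state, as an illustration only:

  $ sudo systemctl is-active kubelet                  # prints "active" when the service is running
  $ sudo systemctl status kubelet --no-pager | head -n 5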
	I1004 00:45:02.312975  135888 kubeadm.go:581] duration metric: took 22.080496148s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 00:45:02.312997  135888 node_conditions.go:102] verifying NodePressure condition ...
	I1004 00:45:02.461630  135888 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 00:45:02.461671  135888 node_conditions.go:123] node cpu capacity is 2
	I1004 00:45:02.461685  135888 node_conditions.go:105] duration metric: took 148.682999ms to run NodePressure ...
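The NodePressure step above reads the node's capacity (17784752Ki ephemeral storage, 2 CPUs) and conditions. A roughly equivalent query, assuming the single node is named after the profile (addons-718830):

  $ kubectl --context addons-718830 get node addons-718830 -o jsonpath='{.status.capacity}{"\n"}{.status.conditions[*].type}{"\n"}'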
	I1004 00:45:02.461701  135888 start.go:228] waiting for startup goroutines ...
	I1004 00:45:02.639724  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:02.653919  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:02.654659  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:02.758148  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:03.126267  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:03.145641  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:03.146767  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:03.267777  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:03.634131  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:03.652152  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:03.652485  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:03.754042  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:04.126445  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:04.167567  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:04.169197  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:04.254576  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:04.627162  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:04.646207  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:04.646584  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:04.753346  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:05.126522  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:05.146501  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:05.146647  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:05.255285  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:05.626356  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:05.645724  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:05.646292  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:05.759567  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:06.126731  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:06.149617  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:06.157011  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:06.255908  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:06.626065  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:06.645185  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:06.649366  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:06.755656  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:07.127043  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:07.149748  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:07.154059  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:07.253320  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:07.626871  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:07.644533  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:07.662229  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:07.752524  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:08.128194  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:08.148779  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:08.148913  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:08.254212  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:08.626805  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:08.645755  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:08.646204  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:08.753494  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:09.126508  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:09.144610  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:09.144623  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:09.253494  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:09.628323  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:09.645830  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:09.648935  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:09.753645  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:10.125955  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:10.145100  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:10.145355  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:10.252977  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:10.627260  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:10.645630  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:10.645854  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:10.757380  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:11.125556  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:11.145014  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:11.145250  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:11.253479  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:11.629862  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:11.651744  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:11.654650  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:11.753255  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:12.323868  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:12.325925  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:12.328975  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:12.329142  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:12.626417  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:12.645428  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:12.648539  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:12.758549  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:13.126780  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:13.144667  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:13.147331  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:13.257039  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:13.662241  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:13.663684  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:13.664876  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:13.752362  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:14.128913  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:14.145793  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:14.152477  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:14.262903  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:14.627148  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:14.669908  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:14.671020  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:14.766352  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:15.126709  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:15.150255  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:15.151353  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:15.255537  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:15.627945  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:15.644031  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:15.654172  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:15.753264  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:16.126825  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:16.143207  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:16.148260  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:16.255599  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:16.626536  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:16.645477  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:16.647903  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:16.753106  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:17.127176  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:17.144968  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:17.145420  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:17.253617  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:17.627516  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:17.646223  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:17.649259  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:17.752543  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:18.127426  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:18.143923  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:18.144989  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:18.254880  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:18.633915  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:18.646722  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:18.648968  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:18.754046  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:19.129589  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:19.147006  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:19.148751  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:19.259563  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:19.627341  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:19.648174  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:19.655937  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:19.760904  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:20.154339  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:20.159228  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:20.159741  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:20.253435  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:20.627147  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:20.645265  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:20.645753  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:20.754066  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:21.332046  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:21.352799  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:21.356978  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:21.373528  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:21.625693  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:21.657811  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:21.658744  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:21.753958  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:22.127561  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:22.146609  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:22.147776  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:22.254408  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:22.644285  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:22.648708  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:22.677742  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:22.753670  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:23.130171  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:23.145911  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:23.146238  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:23.253662  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:23.647602  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:23.655909  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:23.659659  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:23.752784  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:24.126392  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:24.144102  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:24.146023  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:24.253502  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:24.629828  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:24.646723  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:24.651436  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:24.753631  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:25.133835  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:25.145582  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:25.153390  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:25.253527  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:25.630621  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:25.643955  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:25.647894  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:25.753850  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:26.131467  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:26.145113  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:26.148070  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:26.253326  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:26.626987  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:26.646270  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:26.649228  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:26.753034  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:27.125922  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:27.143614  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:27.147978  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:27.253390  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:27.629657  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:27.645335  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:27.645899  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:27.757802  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:28.127014  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:28.144464  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:28.144814  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:28.450933  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:28.627069  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:28.647632  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:28.649119  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:28.753514  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:29.126930  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:29.145071  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:29.145475  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:29.253973  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:29.627147  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:29.646693  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:29.647608  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:29.753458  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:30.128369  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:30.143834  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:30.146624  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:30.253127  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:30.628070  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:30.648847  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:30.662622  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:30.752819  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:31.131429  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:31.148375  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:31.148733  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:31.257729  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:31.626715  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:31.646573  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:31.646638  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:31.752959  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:32.132044  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:32.142623  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:32.146293  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:32.252721  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:32.633983  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:32.644619  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:32.646664  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:32.753567  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:33.127658  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:33.144683  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:33.145331  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:33.255104  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:33.625828  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:33.645804  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:33.646014  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:33.752623  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:34.126570  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:34.146355  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:34.146768  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:34.255301  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:34.632389  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:34.667390  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:34.669808  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:34.754048  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:35.125585  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:35.144272  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:35.145529  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:35.253640  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:35.631464  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:35.644463  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:35.647151  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:35.753673  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:36.128319  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:36.144083  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:36.144300  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:36.252799  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:36.894720  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:36.902143  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:36.908433  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:36.908840  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:37.131577  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:37.144445  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:37.144520  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:37.252771  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:37.639844  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:37.654243  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:37.655900  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:37.760134  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:38.131540  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:38.147815  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:38.148387  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:38.261660  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:38.877713  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:38.900960  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:38.901297  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:38.901632  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:39.126269  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:39.146042  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:39.148942  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:39.256112  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:39.625873  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:39.645067  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:39.646268  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:39.752882  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:40.126625  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:40.148639  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:40.151194  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:40.253482  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:40.628240  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:40.645665  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:40.646018  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:40.754597  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:41.127068  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:41.144018  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:41.145520  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:41.255312  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:41.629887  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:41.644172  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:41.646474  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:41.753342  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:42.126921  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:42.143650  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:42.144307  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:42.253803  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:42.634023  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:42.644631  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:42.652810  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:42.753659  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:43.149318  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:43.153041  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:43.153966  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:43.257860  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:43.672457  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:43.679875  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:43.705212  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:43.759221  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:44.129168  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:44.146452  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:44.148283  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:44.258290  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:44.651798  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:44.654717  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:44.655807  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1004 00:45:44.765404  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:45.127331  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:45.145039  135888 kapi.go:107] duration metric: took 56.564560223s to wait for kubernetes.io/minikube-addons=registry ...
	I1004 00:45:45.145579  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:45.259182  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:45.627304  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:45.644944  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:45.756109  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:46.130141  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:46.146306  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:46.259174  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:46.625718  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:46.655490  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:46.753320  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:47.132203  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:47.143417  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:47.253010  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:47.626960  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:47.647439  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:47.754038  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:48.125995  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:48.144665  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:48.254564  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:48.627848  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:48.647558  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:48.752819  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:49.133549  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:49.156108  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:49.277966  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1004 00:45:49.626351  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:49.644289  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:49.754890  135888 kapi.go:107] duration metric: took 58.028018339s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1004 00:45:49.756850  135888 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-718830 cluster.
	I1004 00:45:49.758314  135888 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1004 00:45:49.759742  135888 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1004 00:45:50.125684  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:50.143639  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:50.630885  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:50.643350  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:51.344933  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:51.345134  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:51.626410  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:51.654632  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:52.128407  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:52.144913  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:52.629987  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:52.648772  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:53.134298  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:53.144248  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:53.632372  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:53.644548  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:54.130847  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:54.144854  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:54.626582  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:54.644103  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:55.126475  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:55.144655  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:55.630982  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:55.643718  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:56.126434  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:56.144439  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:56.626053  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:56.643716  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:57.127132  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:57.144007  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:57.627660  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:57.644121  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:58.126878  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:58.145772  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:58.630083  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:58.648912  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:59.135028  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:59.578982  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:45:59.631778  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:45:59.644178  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:00.126401  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:00.143913  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:00.625873  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:00.643874  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:01.127457  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:01.143335  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:01.841728  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:01.842134  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:02.126256  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:02.143891  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:02.628511  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:02.643977  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:03.126017  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:03.143917  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:03.626703  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:03.648669  135888 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1004 00:46:04.130267  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:04.143551  135888 kapi.go:107] duration metric: took 1m15.568316451s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1004 00:46:04.626621  135888 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1004 00:46:05.125667  135888 kapi.go:107] duration metric: took 1m15.677572149s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1004 00:46:05.127491  135888 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, ingress-dns, metrics-server, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1004 00:46:05.128821  135888 addons.go:502] enable addons completed in 1m25.175974061s: enabled=[storage-provisioner cloud-spanner helm-tiller ingress-dns metrics-server inspektor-gadget storage-provisioner-rancher volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1004 00:46:05.128854  135888 start.go:233] waiting for cluster config update ...
	I1004 00:46:05.128876  135888 start.go:242] writing updated cluster config ...
	I1004 00:46:05.129140  135888 ssh_runner.go:195] Run: rm -f paused
	I1004 00:46:05.183452  135888 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 00:46:05.185209  135888 out.go:177] * Done! kubectl is now configured to use "addons-718830" cluster and "default" namespace by default
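	The gcp-auth messages earlier in this log note that a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. As a minimal sketch of such a pod configuration, expressed with the k8s.io/api Go types (the pod name, label value, and container image here are illustrative assumptions, not taken from the test output):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// A pod definition carrying the gcp-auth-skip-secret label; per the addon's
		// hint in the log above, the gcp-auth webhook should then skip mounting GCP
		// credentials into this pod. The label value itself is arbitrary.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds", // hypothetical name for illustration
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					// illustrative image; any container image would do here
					{Name: "app", Image: "gcr.io/google-samples/hello-app:1.0"},
				},
			},
		}
		fmt.Printf("%s/%s labels: %v\n", pod.Namespace, pod.Name, pod.Labels)
	}

	As the output above also notes, pods that already exist are only affected once they are recreated, or after rerunning addons enable with --refresh.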
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 00:43:54 UTC, ends at Wed 2023-10-04 00:48:49 UTC. --
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.186663996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696380529186646247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=dd535127-bf22-493e-b6ad-44eae733d389 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.187993949Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4c04b7de-753e-4064-86de-70894801f5ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.188042298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4c04b7de-753e-4064-86de-70894801f5ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.188678367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:53e3142ef0c392151146fd2f4d5241c3625cb554e338360e837581c0cd0fcbda,PodSandboxId:73eac06480548a40904933c89d57895a8e018be89905e357cf15b484954788fe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696380521046395807,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ks4w6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff13548a-4d91-4f00-b715-ea3dc86e1a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1a624986,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97429cd5bf8ab37dbb0e3b59894fe81dbe37e76c715b5957511ffb739a6fb13e,PodSandboxId:ee1b12a52770225a95746750aaea1281333efb68cdad46df2935ad1f625d53c8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696380394085089287,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-cwm9b,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 173edb6c-8d90-4319-b4e0-8c6de3abb9ae,},An
notations:map[string]string{io.kubernetes.container.hash: de23ebbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba782bebe50926a3cd85958d6ddbdb231ab1c3babc18f1496435b939ef690fe,PodSandboxId:12bbd3f35bfa528ab625e0154f47246d8c6d290c202d99d97516e7da10757e66,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696380381124293316,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: d178caad-4b07-44be-bc0c-87060bf92e83,},Annotations:map[string]string{io.kubernetes.container.hash: 1a75629b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de781c944204919a2c2e188659148761e79612ed2a8f1e918abbaecb36931382,PodSandboxId:4954f908393cb0f64b16a92884ae9cdfd7dfde620435dd7083cae4bd98120d99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380352826415715,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-l2wg5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52dbd0d4-6ba3-4471-bb32-41f07e7582e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d03e689,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0ef9f8009000da0c47bbe31302b2a93c500b91d0000d2f83710a804070b330,PodSandboxId:c5369eb9a4d8ef0ea6d99a4af9b5db78747c521a540bc5463c741a63c82e8741,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696380348536047415,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hkkr8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4be0cecc-4711-4853-9812-8d6b08e3eaa8,},Annotations:map[string]string{io.kubernetes.container.hash: e1bcb4c8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47999763df65c7f8fdf613567d3aa16deacc66279086b71c05a86802c9dca476,PodSandboxId:2baeb8044569c47a6e4387e7984ee57da9ae5a4a246c48d268e2aea135b5b147,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380344217043156,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9pfvp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52,},Annotations:map[string]string{io.kubernetes.container.hash: 4f746d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599c41fdac85705de3a48319e58074ea04f712822383bbc67430993d970c07fe,PodSandboxId:053cc6b94c1028333ab566ae141913efc0be193726a6b17ac6c5b57d1d626414,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696380299558728980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5650e99f-daeb-4b47-98e6-dec2fdcbf6ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac7bdf48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5227c31def0c87db305bbf52f623d6d358ec4d417e40eec7584735381059aedb,PodSandboxId:468799f8cf5cc45e3fc610724c6cb3bae8c41098744405e8071a45f6eab6594b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,St
ate:CONTAINER_RUNNING,CreatedAt:1696380293428088106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rmz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a44a0ce-9fb3-418b-8978-a56f84360964,},Annotations:map[string]string{io.kubernetes.container.hash: 22111c7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04b41fdd11790de1a421d2834887147070410fa0d6f35d9cdbffecfd6a5c7dc,PodSandboxId:ec709db732b7db410dcd20bd32714c7dbd77bababdc97f9165e103b368ed46ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
696380284126994397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrmbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f67984-920b-47b0-bb66-ac61c52fe9ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8095a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4cfbff345c5eb6e58abe0f10a09d802d1daba45d055e7d0b36b7263c03357,PodSandboxId:bfcde83b7863295ee4bd3943c05cbe8a1e134e4881027840308a2d216f8662ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f3
97e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696380259250219660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742f75fc9713831267d46ddc39031ba5,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2b494f50ebbf968154b96c34c0de6c9da5c1f9bca12f49edab705b7f5309a5,PodSandboxId:059cc9bf7591e4d09bf618ddf94e82d4278d53af404051f9e2a041f20cf3dcc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2
cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696380258980599102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d41bf5c3e2275e4847436d79d8d20e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d606cff5a24cab57d1813e15436541b26ac584b6a0005d6c56a5d3e1480f9,PodSandboxId:30c02316d2ec4c8d2319d50de5199acb9cd69f0861e48f4cc8ba75d6a5fa2363,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959b
a2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696380258618955484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270c7ef4ca6e03b349fb7337146dc92c,},Annotations:map[string]string{io.kubernetes.container.hash: d42d7342,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e02c8964e30536619d9b1c787e77785a17a9d5d8c3d74826fc39c66833b3d23,PodSandboxId:f8c3bca63ed29d51562cb7bcd507cde67e028d072fc4166c98d6a8717c9c2f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696380258688658559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2661f1f58135d0a1e284c5f05ea72481,},Annotations:map[string]string{io.kubernetes.container.hash: 65322070,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4c04b7de-753e-4064-86de-70894801f5ef name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.227915516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=562f9dee-8291-4a4a-a6c6-07252bda5e77 name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.227974312Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=562f9dee-8291-4a4a-a6c6-07252bda5e77 name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.229484842Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8bdb1e38-5c7f-4063-8054-f7f3e919156d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.232206952Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696380529231914980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=8bdb1e38-5c7f-4063-8054-f7f3e919156d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.237611571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=13658b81-223a-4f28-8719-d6840253d02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.237687870Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=13658b81-223a-4f28-8719-d6840253d02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.238002975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:53e3142ef0c392151146fd2f4d5241c3625cb554e338360e837581c0cd0fcbda,PodSandboxId:73eac06480548a40904933c89d57895a8e018be89905e357cf15b484954788fe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696380521046395807,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ks4w6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff13548a-4d91-4f00-b715-ea3dc86e1a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1a624986,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97429cd5bf8ab37dbb0e3b59894fe81dbe37e76c715b5957511ffb739a6fb13e,PodSandboxId:ee1b12a52770225a95746750aaea1281333efb68cdad46df2935ad1f625d53c8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696380394085089287,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-cwm9b,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 173edb6c-8d90-4319-b4e0-8c6de3abb9ae,},An
notations:map[string]string{io.kubernetes.container.hash: de23ebbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba782bebe50926a3cd85958d6ddbdb231ab1c3babc18f1496435b939ef690fe,PodSandboxId:12bbd3f35bfa528ab625e0154f47246d8c6d290c202d99d97516e7da10757e66,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696380381124293316,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: d178caad-4b07-44be-bc0c-87060bf92e83,},Annotations:map[string]string{io.kubernetes.container.hash: 1a75629b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de781c944204919a2c2e188659148761e79612ed2a8f1e918abbaecb36931382,PodSandboxId:4954f908393cb0f64b16a92884ae9cdfd7dfde620435dd7083cae4bd98120d99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380352826415715,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-l2wg5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52dbd0d4-6ba3-4471-bb32-41f07e7582e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d03e689,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0ef9f8009000da0c47bbe31302b2a93c500b91d0000d2f83710a804070b330,PodSandboxId:c5369eb9a4d8ef0ea6d99a4af9b5db78747c521a540bc5463c741a63c82e8741,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696380348536047415,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hkkr8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4be0cecc-4711-4853-9812-8d6b08e3eaa8,},Annotations:map[string]string{io.kubernetes.container.hash: e1bcb4c8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47999763df65c7f8fdf613567d3aa16deacc66279086b71c05a86802c9dca476,PodSandboxId:2baeb8044569c47a6e4387e7984ee57da9ae5a4a246c48d268e2aea135b5b147,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380344217043156,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9pfvp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52,},Annotations:map[string]string{io.kubernetes.container.hash: 4f746d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599c41fdac85705de3a48319e58074ea04f712822383bbc67430993d970c07fe,PodSandboxId:053cc6b94c1028333ab566ae141913efc0be193726a6b17ac6c5b57d1d626414,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696380299558728980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5650e99f-daeb-4b47-98e6-dec2fdcbf6ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac7bdf48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5227c31def0c87db305bbf52f623d6d358ec4d417e40eec7584735381059aedb,PodSandboxId:468799f8cf5cc45e3fc610724c6cb3bae8c41098744405e8071a45f6eab6594b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,St
ate:CONTAINER_RUNNING,CreatedAt:1696380293428088106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rmz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a44a0ce-9fb3-418b-8978-a56f84360964,},Annotations:map[string]string{io.kubernetes.container.hash: 22111c7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04b41fdd11790de1a421d2834887147070410fa0d6f35d9cdbffecfd6a5c7dc,PodSandboxId:ec709db732b7db410dcd20bd32714c7dbd77bababdc97f9165e103b368ed46ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
696380284126994397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrmbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f67984-920b-47b0-bb66-ac61c52fe9ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8095a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4cfbff345c5eb6e58abe0f10a09d802d1daba45d055e7d0b36b7263c03357,PodSandboxId:bfcde83b7863295ee4bd3943c05cbe8a1e134e4881027840308a2d216f8662ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f3
97e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696380259250219660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742f75fc9713831267d46ddc39031ba5,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2b494f50ebbf968154b96c34c0de6c9da5c1f9bca12f49edab705b7f5309a5,PodSandboxId:059cc9bf7591e4d09bf618ddf94e82d4278d53af404051f9e2a041f20cf3dcc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2
cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696380258980599102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d41bf5c3e2275e4847436d79d8d20e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d606cff5a24cab57d1813e15436541b26ac584b6a0005d6c56a5d3e1480f9,PodSandboxId:30c02316d2ec4c8d2319d50de5199acb9cd69f0861e48f4cc8ba75d6a5fa2363,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959b
a2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696380258618955484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270c7ef4ca6e03b349fb7337146dc92c,},Annotations:map[string]string{io.kubernetes.container.hash: d42d7342,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e02c8964e30536619d9b1c787e77785a17a9d5d8c3d74826fc39c66833b3d23,PodSandboxId:f8c3bca63ed29d51562cb7bcd507cde67e028d072fc4166c98d6a8717c9c2f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696380258688658559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2661f1f58135d0a1e284c5f05ea72481,},Annotations:map[string]string{io.kubernetes.container.hash: 65322070,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=13658b81-223a-4f28-8719-d6840253d02c name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.278414857Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0b429f59-c4d4-4b71-ba1e-4245e19435bc name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.278523151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0b429f59-c4d4-4b71-ba1e-4245e19435bc name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.280131079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f597aa04-c8fb-47a1-8250-394e824cc886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.281461008Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696380529281442959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=f597aa04-c8fb-47a1-8250-394e824cc886 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.282086563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6db8aea0-3ac9-41fd-8997-f1c2b6889535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.282200044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6db8aea0-3ac9-41fd-8997-f1c2b6889535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.282523425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:53e3142ef0c392151146fd2f4d5241c3625cb554e338360e837581c0cd0fcbda,PodSandboxId:73eac06480548a40904933c89d57895a8e018be89905e357cf15b484954788fe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696380521046395807,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ks4w6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff13548a-4d91-4f00-b715-ea3dc86e1a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1a624986,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97429cd5bf8ab37dbb0e3b59894fe81dbe37e76c715b5957511ffb739a6fb13e,PodSandboxId:ee1b12a52770225a95746750aaea1281333efb68cdad46df2935ad1f625d53c8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696380394085089287,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-cwm9b,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 173edb6c-8d90-4319-b4e0-8c6de3abb9ae,},An
notations:map[string]string{io.kubernetes.container.hash: de23ebbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba782bebe50926a3cd85958d6ddbdb231ab1c3babc18f1496435b939ef690fe,PodSandboxId:12bbd3f35bfa528ab625e0154f47246d8c6d290c202d99d97516e7da10757e66,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696380381124293316,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: d178caad-4b07-44be-bc0c-87060bf92e83,},Annotations:map[string]string{io.kubernetes.container.hash: 1a75629b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de781c944204919a2c2e188659148761e79612ed2a8f1e918abbaecb36931382,PodSandboxId:4954f908393cb0f64b16a92884ae9cdfd7dfde620435dd7083cae4bd98120d99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380352826415715,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-l2wg5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52dbd0d4-6ba3-4471-bb32-41f07e7582e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d03e689,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0ef9f8009000da0c47bbe31302b2a93c500b91d0000d2f83710a804070b330,PodSandboxId:c5369eb9a4d8ef0ea6d99a4af9b5db78747c521a540bc5463c741a63c82e8741,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696380348536047415,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hkkr8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4be0cecc-4711-4853-9812-8d6b08e3eaa8,},Annotations:map[string]string{io.kubernetes.container.hash: e1bcb4c8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47999763df65c7f8fdf613567d3aa16deacc66279086b71c05a86802c9dca476,PodSandboxId:2baeb8044569c47a6e4387e7984ee57da9ae5a4a246c48d268e2aea135b5b147,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380344217043156,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9pfvp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52,},Annotations:map[string]string{io.kubernetes.container.hash: 4f746d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599c41fdac85705de3a48319e58074ea04f712822383bbc67430993d970c07fe,PodSandboxId:053cc6b94c1028333ab566ae141913efc0be193726a6b17ac6c5b57d1d626414,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696380299558728980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5650e99f-daeb-4b47-98e6-dec2fdcbf6ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac7bdf48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5227c31def0c87db305bbf52f623d6d358ec4d417e40eec7584735381059aedb,PodSandboxId:468799f8cf5cc45e3fc610724c6cb3bae8c41098744405e8071a45f6eab6594b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,St
ate:CONTAINER_RUNNING,CreatedAt:1696380293428088106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rmz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a44a0ce-9fb3-418b-8978-a56f84360964,},Annotations:map[string]string{io.kubernetes.container.hash: 22111c7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04b41fdd11790de1a421d2834887147070410fa0d6f35d9cdbffecfd6a5c7dc,PodSandboxId:ec709db732b7db410dcd20bd32714c7dbd77bababdc97f9165e103b368ed46ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
696380284126994397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrmbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f67984-920b-47b0-bb66-ac61c52fe9ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8095a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4cfbff345c5eb6e58abe0f10a09d802d1daba45d055e7d0b36b7263c03357,PodSandboxId:bfcde83b7863295ee4bd3943c05cbe8a1e134e4881027840308a2d216f8662ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f3
97e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696380259250219660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742f75fc9713831267d46ddc39031ba5,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2b494f50ebbf968154b96c34c0de6c9da5c1f9bca12f49edab705b7f5309a5,PodSandboxId:059cc9bf7591e4d09bf618ddf94e82d4278d53af404051f9e2a041f20cf3dcc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2
cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696380258980599102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d41bf5c3e2275e4847436d79d8d20e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d606cff5a24cab57d1813e15436541b26ac584b6a0005d6c56a5d3e1480f9,PodSandboxId:30c02316d2ec4c8d2319d50de5199acb9cd69f0861e48f4cc8ba75d6a5fa2363,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959b
a2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696380258618955484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270c7ef4ca6e03b349fb7337146dc92c,},Annotations:map[string]string{io.kubernetes.container.hash: d42d7342,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e02c8964e30536619d9b1c787e77785a17a9d5d8c3d74826fc39c66833b3d23,PodSandboxId:f8c3bca63ed29d51562cb7bcd507cde67e028d072fc4166c98d6a8717c9c2f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696380258688658559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2661f1f58135d0a1e284c5f05ea72481,},Annotations:map[string]string{io.kubernetes.container.hash: 65322070,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6db8aea0-3ac9-41fd-8997-f1c2b6889535 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.320092688Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9ee73c07-7de6-401f-b23a-3e5fa7da4542 name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.320148934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9ee73c07-7de6-401f-b23a-3e5fa7da4542 name=/runtime.v1.RuntimeService/Version
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.321629703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5e0ba269-37bc-4a53-bd8f-99fea911fe37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.322791207Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696380529322776573,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:506255,},InodesUsed:&UInt64Value{Value:214,},},},}" file="go-grpc-middleware/chain.go:25" id=5e0ba269-37bc-4a53-bd8f-99fea911fe37 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.323557013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4ec8330-c85b-4e49-bfed-17201e5c89a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.323610782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4ec8330-c85b-4e49-bfed-17201e5c89a3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 00:48:49 addons-718830 crio[709]: time="2023-10-04 00:48:49.324072903Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:53e3142ef0c392151146fd2f4d5241c3625cb554e338360e837581c0cd0fcbda,PodSandboxId:73eac06480548a40904933c89d57895a8e018be89905e357cf15b484954788fe,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696380521046395807,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ks4w6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: ff13548a-4d91-4f00-b715-ea3dc86e1a07,},Annotations:map[string]string{io.kubernetes.container.hash: 1a624986,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97429cd5bf8ab37dbb0e3b59894fe81dbe37e76c715b5957511ffb739a6fb13e,PodSandboxId:ee1b12a52770225a95746750aaea1281333efb68cdad46df2935ad1f625d53c8,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c,State:CONTAINER_RUNNING,CreatedAt:1696380394085089287,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-58b88cff49-cwm9b,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 173edb6c-8d90-4319-b4e0-8c6de3abb9ae,},An
notations:map[string]string{io.kubernetes.container.hash: de23ebbd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ba782bebe50926a3cd85958d6ddbdb231ab1c3babc18f1496435b939ef690fe,PodSandboxId:12bbd3f35bfa528ab625e0154f47246d8c6d290c202d99d97516e7da10757e66,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696380381124293316,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: d178caad-4b07-44be-bc0c-87060bf92e83,},Annotations:map[string]string{io.kubernetes.container.hash: 1a75629b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de781c944204919a2c2e188659148761e79612ed2a8f1e918abbaecb36931382,PodSandboxId:4954f908393cb0f64b16a92884ae9cdfd7dfde620435dd7083cae4bd98120d99,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380352826415715,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-l2wg5,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52dbd0d4-6ba3-4471-bb32-41f07e7582e6,},Annotations:map[string]string{io.kubernetes.container.hash: 2d03e689,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0ef9f8009000da0c47bbe31302b2a93c500b91d0000d2f83710a804070b330,PodSandboxId:c5369eb9a4d8ef0ea6d99a4af9b5db78747c521a540bc5463c741a63c82e8741,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1696380348536047415,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-hkkr8,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 4be0cecc-4711-4853-9812-8d6b08e3eaa8,},Annotations:map[string]string{io.kubernetes.container.hash: e1bcb4c8,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47999763df65c7f8fdf613567d3aa16deacc66279086b71c05a86802c9dca476,PodSandboxId:2baeb8044569c47a6e4387e7984ee57da9ae5a4a246c48d268e2aea135b5b147,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf
80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1696380344217043156,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-9pfvp,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52,},Annotations:map[string]string{io.kubernetes.container.hash: 4f746d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:599c41fdac85705de3a48319e58074ea04f712822383bbc67430993d970c07fe,PodSandboxId:053cc6b94c1028333ab566ae141913efc0be193726a6b17ac6c5b57d1d626414,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696380299558728980,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5650e99f-daeb-4b47-98e6-dec2fdcbf6ea,},Annotations:map[string]string{io.kubernetes.container.hash: ac7bdf48,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5227c31def0c87db305bbf52f623d6d358ec4d417e40eec7584735381059aedb,PodSandboxId:468799f8cf5cc45e3fc610724c6cb3bae8c41098744405e8071a45f6eab6594b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,St
ate:CONTAINER_RUNNING,CreatedAt:1696380293428088106,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7rmz2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0a44a0ce-9fb3-418b-8978-a56f84360964,},Annotations:map[string]string{io.kubernetes.container.hash: 22111c7b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04b41fdd11790de1a421d2834887147070410fa0d6f35d9cdbffecfd6a5c7dc,PodSandboxId:ec709db732b7db410dcd20bd32714c7dbd77bababdc97f9165e103b368ed46ed,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
696380284126994397,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jrmbv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10f67984-920b-47b0-bb66-ac61c52fe9ae,},Annotations:map[string]string{io.kubernetes.container.hash: 7e8095a5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab4cfbff345c5eb6e58abe0f10a09d802d1daba45d055e7d0b36b7263c03357,PodSandboxId:bfcde83b7863295ee4bd3943c05cbe8a1e134e4881027840308a2d216f8662ec,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f3
97e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696380259250219660,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 742f75fc9713831267d46ddc39031ba5,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a2b494f50ebbf968154b96c34c0de6c9da5c1f9bca12f49edab705b7f5309a5,PodSandboxId:059cc9bf7591e4d09bf618ddf94e82d4278d53af404051f9e2a041f20cf3dcc6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2
cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696380258980599102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66d41bf5c3e2275e4847436d79d8d20e,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee3d606cff5a24cab57d1813e15436541b26ac584b6a0005d6c56a5d3e1480f9,PodSandboxId:30c02316d2ec4c8d2319d50de5199acb9cd69f0861e48f4cc8ba75d6a5fa2363,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959b
a2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696380258618955484,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 270c7ef4ca6e03b349fb7337146dc92c,},Annotations:map[string]string{io.kubernetes.container.hash: d42d7342,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e02c8964e30536619d9b1c787e77785a17a9d5d8c3d74826fc39c66833b3d23,PodSandboxId:f8c3bca63ed29d51562cb7bcd507cde67e028d072fc4166c98d6a8717c9c2f48,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696380258688658559,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-718830,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2661f1f58135d0a1e284c5f05ea72481,},Annotations:map[string]string{io.kubernetes.container.hash: 65322070,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4ec8330-c85b-4e49-bfed-17201e5c89a3 name=/runtime.v1.RuntimeService/ListContainers
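	Note: the repeated Version/ImageFsInfo/ListContainers request/response pairs above are routine kubelet polling of CRI-O over its CRI socket, not activity specific to this failure. For reference, a minimal Go sketch of the same three calls using the v1 CRI gRPC client; the client package and the unix:///var/run/crio/crio.sock target are assumptions based on the cri-socket annotation reported for the node later in this log, not code taken from the test harness.

// Illustrative only: issues the CRI calls seen in the crio debug log above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Assumed socket path; taken from the node's kubeadm cri-socket annotation.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Println("image fs:", u.FsId.GetMountpoint(), u.UsedBytes.GetValue(), "bytes")
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter,
	// which is why crio logs "No filters were applied".
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.GetName(), c.State.String())
	}
}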
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	53e3142ef0c39       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6                      8 seconds ago       Running             hello-world-app           0                   73eac06480548       hello-world-app-5d77478584-ks4w6
	97429cd5bf8ab       ghcr.io/headlamp-k8s/headlamp@sha256:669910dd0cb38640a44cb93fedebaffb8f971131576ee13b2fee27a784e7503c                        2 minutes ago       Running             headlamp                  0                   ee1b12a527702       headlamp-58b88cff49-cwm9b
	6ba782bebe509       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                              2 minutes ago       Running             nginx                     0                   12bbd3f35bfa5       nginx
	de781c9442049       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                             2 minutes ago       Exited              patch                     2                   4954f908393cb       ingress-nginx-admission-patch-l2wg5
	bb0ef9f800900       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 3 minutes ago       Running             gcp-auth                  0                   c5369eb9a4d8e       gcp-auth-d4c87556c-hkkr8
	47999763df65c       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   2baeb8044569c       ingress-nginx-admission-create-9pfvp
	599c41fdac857       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   053cc6b94c102       storage-provisioner
	5227c31def0c8       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                                             3 minutes ago       Running             kube-proxy                0                   468799f8cf5cc       kube-proxy-7rmz2
	a04b41fdd1179       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   ec709db732b7d       coredns-5dd5756b68-jrmbv
	9ab4cfbff345c       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                                             4 minutes ago       Running             kube-scheduler            0                   bfcde83b78632       kube-scheduler-addons-718830
	7a2b494f50ebb       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                                             4 minutes ago       Running             kube-controller-manager   0                   059cc9bf7591e       kube-controller-manager-addons-718830
	3e02c8964e305       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                                             4 minutes ago       Running             kube-apiserver            0                   f8c3bca63ed29       kube-apiserver-addons-718830
	ee3d606cff5a2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   30c02316d2ec4       etcd-addons-718830
	
	* 
	* ==> coredns [a04b41fdd11790de1a421d2834887147070410fa0d6f35d9cdbffecfd6a5c7dc] <==
	* [INFO] 10.244.0.6:59956 - 19604 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177645s
	[INFO] 10.244.0.6:45682 - 3109 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000052189s
	[INFO] 10.244.0.6:45682 - 34080 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000023641s
	[INFO] 10.244.0.6:40894 - 53530 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039807s
	[INFO] 10.244.0.6:40894 - 15385 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038662s
	[INFO] 10.244.0.6:43526 - 31276 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085157s
	[INFO] 10.244.0.6:43526 - 24878 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037503s
	[INFO] 10.244.0.6:58386 - 55957 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074159s
	[INFO] 10.244.0.6:58386 - 47257 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000148889s
	[INFO] 10.244.0.6:42982 - 14595 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000098687s
	[INFO] 10.244.0.6:42982 - 47872 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091055s
	[INFO] 10.244.0.6:46346 - 48885 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081089s
	[INFO] 10.244.0.6:46346 - 49399 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000455s
	[INFO] 10.244.0.6:45724 - 54967 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000061428s
	[INFO] 10.244.0.6:45724 - 62641 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000026657s
	[INFO] 10.244.0.18:56111 - 36545 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000272408s
	[INFO] 10.244.0.18:60550 - 17629 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000098509s
	[INFO] 10.244.0.18:35861 - 61882 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000096322s
	[INFO] 10.244.0.18:35265 - 51469 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092779s
	[INFO] 10.244.0.18:54456 - 45947 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000281929s
	[INFO] 10.244.0.18:42180 - 20492 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000074099s
	[INFO] 10.244.0.18:45115 - 45969 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00054551s
	[INFO] 10.244.0.18:49015 - 33233 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.00039076s
	[INFO] 10.244.0.22:39247 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00033853s
	[INFO] 10.244.0.22:37023 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000477462s
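	Note: the runs of NXDOMAIN answers followed by a final NOERROR for the bare service name are ordinary ndots search-path expansion, not a resolution failure. A rough sketch of that expansion follows, assuming the usual kubelet-generated resolv.conf (ndots:5 plus the cluster search domains for a kube-system pod); the VM's actual resolv.conf is not shown in this log.

// Sketch of stub-resolver search-list expansion that produces the query
// pattern in the CoreDNS log above. Search domains and ndots are assumed
// kubelet defaults, not values read from this cluster.
package main

import (
	"fmt"
	"strings"
)

// expand returns the fully-qualified names a stub resolver would try, in order.
func expand(name string, search []string, ndots int) []string {
	var tries []string
	appendSearch := func() {
		for _, s := range search {
			tries = append(tries, name+"."+s+".")
		}
	}
	if strings.Count(name, ".") < ndots {
		// Fewer dots than ndots: search suffixes first, absolute name last.
		appendSearch()
		tries = append(tries, name+".")
	} else {
		tries = append(tries, name+".")
		appendSearch()
	}
	return tries
}

func main() {
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	// "registry.kube-system.svc.cluster.local" has 4 dots, below ndots:5, so the
	// three search-suffixed names are tried (the NXDOMAINs in the log) before the
	// bare name finally answers NOERROR.
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q)
	}
}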
	
	* 
	* ==> describe nodes <==
	* Name:               addons-718830
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-718830
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=addons-718830
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T00_44_26_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-718830
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:44:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-718830
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 00:48:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 00:47:00 +0000   Wed, 04 Oct 2023 00:44:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 00:47:00 +0000   Wed, 04 Oct 2023 00:44:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 00:47:00 +0000   Wed, 04 Oct 2023 00:44:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 00:47:00 +0000   Wed, 04 Oct 2023 00:44:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    addons-718830
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 f08fba0cf4d24b169dfeccba19d9d758
	  System UUID:                f08fba0c-f4d2-4b16-9dfe-ccba19d9d758
	  Boot ID:                    a0b5ad15-e0c3-4351-8b05-c0b70f423916
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-ks4w6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-hkkr8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  headlamp                    headlamp-58b88cff49-cwm9b                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 coredns-5dd5756b68-jrmbv                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m10s
	  kube-system                 etcd-addons-718830                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-addons-718830             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-controller-manager-addons-718830    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-proxy-7rmz2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-addons-718830             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m52s  kube-proxy       
	  Normal  Starting                 4m23s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s  kubelet          Node addons-718830 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s  kubelet          Node addons-718830 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s  kubelet          Node addons-718830 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m22s  kubelet          Node addons-718830 status is now: NodeReady
	  Normal  RegisteredNode           4m11s  node-controller  Node addons-718830 event: Registered Node addons-718830 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.099766] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.441993] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.515631] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154280] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.981246] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 00:44] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.125777] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.147906] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.110657] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.204881] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[ +10.069069] systemd-fstab-generator[906]: Ignoring "noauto" for root device
	[  +9.287489] systemd-fstab-generator[1237]: Ignoring "noauto" for root device
	[ +25.634757] kauditd_printk_skb: 59 callbacks suppressed
	[Oct 4 00:45] kauditd_printk_skb: 16 callbacks suppressed
	[  +8.860203] kauditd_printk_skb: 14 callbacks suppressed
	[Oct 4 00:46] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.608132] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.616376] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.650069] kauditd_printk_skb: 8 callbacks suppressed
	[  +6.996184] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 4 00:47] kauditd_printk_skb: 16 callbacks suppressed
	[Oct 4 00:48] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [ee3d606cff5a24cab57d1813e15436541b26ac584b6a0005d6c56a5d3e1480f9] <==
	* {"level":"info","ts":"2023-10-04T00:45:59.582577Z","caller":"traceutil/trace.go:171","msg":"trace[647907512] transaction","detail":"{read_only:false; response_revision:1076; number_of_response:1; }","duration":"419.704783ms","start":"2023-10-04T00:45:59.162859Z","end":"2023-10-04T00:45:59.582564Z","steps":["trace[647907512] 'process raft request'  (duration: 419.392349ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:45:59.582774Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T00:45:59.16284Z","time spent":"419.823272ms","remote":"127.0.0.1:32846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-718830\" mod_revision:1025 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-718830\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-718830\" > >"}
	{"level":"warn","ts":"2023-10-04T00:45:59.58297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"329.291673ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T00:45:59.583021Z","caller":"traceutil/trace.go:171","msg":"trace[221122163] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1076; }","duration":"329.34291ms","start":"2023-10-04T00:45:59.253669Z","end":"2023-10-04T00:45:59.583012Z","steps":["trace[221122163] 'agreement among raft nodes before linearized reading'  (duration: 329.245ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:45:59.58306Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T00:45:59.253656Z","time spent":"329.398316ms","remote":"127.0.0.1:32874","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-04T00:46:01.830803Z","caller":"traceutil/trace.go:171","msg":"trace[2129331419] linearizableReadLoop","detail":"{readStateIndex:1115; appliedIndex:1114; }","duration":"262.185037ms","start":"2023-10-04T00:46:01.568604Z","end":"2023-10-04T00:46:01.830789Z","steps":["trace[2129331419] 'read index received'  (duration: 261.994435ms)","trace[2129331419] 'applied index is now lower than readState.Index'  (duration: 189.999µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T00:46:01.830953Z","caller":"traceutil/trace.go:171","msg":"trace[243521299] transaction","detail":"{read_only:false; response_revision:1078; number_of_response:1; }","duration":"346.131027ms","start":"2023-10-04T00:46:01.484813Z","end":"2023-10-04T00:46:01.830945Z","steps":["trace[243521299] 'process raft request'  (duration: 345.798749ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:46:01.83112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.012933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-10-04T00:46:01.831184Z","caller":"traceutil/trace.go:171","msg":"trace[759062330] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1078; }","duration":"229.083481ms","start":"2023-10-04T00:46:01.602091Z","end":"2023-10-04T00:46:01.831174Z","steps":["trace[759062330] 'agreement among raft nodes before linearized reading'  (duration: 228.982651ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:46:01.831241Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T00:46:01.484798Z","time spent":"346.194956ms","remote":"127.0.0.1:32846","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1065 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2023-10-04T00:46:01.831391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.802003ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2023-10-04T00:46:01.831465Z","caller":"traceutil/trace.go:171","msg":"trace[1228462549] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1078; }","duration":"262.877635ms","start":"2023-10-04T00:46:01.568581Z","end":"2023-10-04T00:46:01.831458Z","steps":["trace[1228462549] 'agreement among raft nodes before linearized reading'  (duration: 262.715982ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:46:01.831616Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.410151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13841"}
	{"level":"info","ts":"2023-10-04T00:46:01.831633Z","caller":"traceutil/trace.go:171","msg":"trace[1550258962] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1078; }","duration":"193.428754ms","start":"2023-10-04T00:46:01.638199Z","end":"2023-10-04T00:46:01.831628Z","steps":["trace[1550258962] 'agreement among raft nodes before linearized reading'  (duration: 193.365227ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:46:01.831838Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.23959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78368"}
	{"level":"info","ts":"2023-10-04T00:46:01.831853Z","caller":"traceutil/trace.go:171","msg":"trace[1554179606] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1078; }","duration":"213.256651ms","start":"2023-10-04T00:46:01.618592Z","end":"2023-10-04T00:46:01.831849Z","steps":["trace[1554179606] 'agreement among raft nodes before linearized reading'  (duration: 213.15483ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:46:01.986279Z","caller":"traceutil/trace.go:171","msg":"trace[1270745556] transaction","detail":"{read_only:false; response_revision:1079; number_of_response:1; }","duration":"147.29847ms","start":"2023-10-04T00:46:01.838966Z","end":"2023-10-04T00:46:01.986264Z","steps":["trace[1270745556] 'process raft request'  (duration: 144.53794ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:46:01.989445Z","caller":"traceutil/trace.go:171","msg":"trace[879094360] transaction","detail":"{read_only:false; response_revision:1080; number_of_response:1; }","duration":"149.833739ms","start":"2023-10-04T00:46:01.8396Z","end":"2023-10-04T00:46:01.989433Z","steps":["trace[879094360] 'process raft request'  (duration: 149.679816ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:46:19.799081Z","caller":"traceutil/trace.go:171","msg":"trace[935617630] transaction","detail":"{read_only:false; response_revision:1245; number_of_response:1; }","duration":"171.514644ms","start":"2023-10-04T00:46:19.62755Z","end":"2023-10-04T00:46:19.799064Z","steps":["trace[935617630] 'process raft request'  (duration: 171.206166ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:46:20.049063Z","caller":"traceutil/trace.go:171","msg":"trace[1469620543] transaction","detail":"{read_only:false; response_revision:1246; number_of_response:1; }","duration":"114.3323ms","start":"2023-10-04T00:46:19.934712Z","end":"2023-10-04T00:46:20.049045Z","steps":["trace[1469620543] 'process raft request'  (duration: 114.022651ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:46:44.669209Z","caller":"traceutil/trace.go:171","msg":"trace[1623497595] linearizableReadLoop","detail":"{readStateIndex:1454; appliedIndex:1453; }","duration":"175.611527ms","start":"2023-10-04T00:46:44.493583Z","end":"2023-10-04T00:46:44.669194Z","steps":["trace[1623497595] 'read index received'  (duration: 175.460204ms)","trace[1623497595] 'applied index is now lower than readState.Index'  (duration: 150.619µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T00:46:44.669667Z","caller":"traceutil/trace.go:171","msg":"trace[281566023] transaction","detail":"{read_only:false; response_revision:1403; number_of_response:1; }","duration":"227.204817ms","start":"2023-10-04T00:46:44.442451Z","end":"2023-10-04T00:46:44.669656Z","steps":["trace[281566023] 'process raft request'  (duration: 226.647504ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T00:46:44.669968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.319896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:1 size:636"}
	{"level":"info","ts":"2023-10-04T00:46:44.670235Z","caller":"traceutil/trace.go:171","msg":"trace[2036348003] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:1; response_revision:1403; }","duration":"176.665263ms","start":"2023-10-04T00:46:44.49356Z","end":"2023-10-04T00:46:44.670225Z","steps":["trace[2036348003] 'agreement among raft nodes before linearized reading'  (duration: 176.302051ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T00:47:21.25928Z","caller":"traceutil/trace.go:171","msg":"trace[1572187463] transaction","detail":"{read_only:false; response_revision:1657; number_of_response:1; }","duration":"264.768042ms","start":"2023-10-04T00:47:20.99446Z","end":"2023-10-04T00:47:21.259228Z","steps":["trace[1572187463] 'process raft request'  (duration: 264.385805ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [bb0ef9f8009000da0c47bbe31302b2a93c500b91d0000d2f83710a804070b330] <==
	* 2023/10/04 00:45:48 GCP Auth Webhook started!
	2023/10/04 00:46:10 Ready to marshal response ...
	2023/10/04 00:46:10 Ready to write response ...
	2023/10/04 00:46:14 Ready to marshal response ...
	2023/10/04 00:46:14 Ready to write response ...
	2023/10/04 00:46:15 Ready to marshal response ...
	2023/10/04 00:46:15 Ready to write response ...
	2023/10/04 00:46:16 Ready to marshal response ...
	2023/10/04 00:46:16 Ready to write response ...
	2023/10/04 00:46:16 Ready to marshal response ...
	2023/10/04 00:46:16 Ready to write response ...
	2023/10/04 00:46:26 Ready to marshal response ...
	2023/10/04 00:46:26 Ready to write response ...
	2023/10/04 00:46:27 Ready to marshal response ...
	2023/10/04 00:46:27 Ready to write response ...
	2023/10/04 00:46:27 Ready to marshal response ...
	2023/10/04 00:46:27 Ready to write response ...
	2023/10/04 00:46:38 Ready to marshal response ...
	2023/10/04 00:46:38 Ready to write response ...
	2023/10/04 00:46:38 Ready to marshal response ...
	2023/10/04 00:46:38 Ready to write response ...
	2023/10/04 00:46:56 Ready to marshal response ...
	2023/10/04 00:46:56 Ready to write response ...
	2023/10/04 00:48:38 Ready to marshal response ...
	2023/10/04 00:48:38 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  00:48:49 up 5 min,  0 users,  load average: 1.19, 1.84, 0.94
	Linux addons-718830 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3e02c8964e30536619d9b1c787e77785a17a9d5d8c3d74826fc39c66833b3d23] <==
	* I1004 00:46:15.939543       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1004 00:46:27.008677       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.209.179"}
	I1004 00:46:53.030918       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1004 00:46:54.619858       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E1004 00:46:54.914718       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc007c7ced0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc008b7f180), ResponseWriter:(*httpsnoop.rw)(0xc008b7f180), Flusher:(*httpsnoop.rw)(0xc008b7f180), CloseNotifier:(*httpsnoop.rw)(0xc008b7f180), Pusher:(*httpsnoop.rw)(0xc008b7f180)}}, encoder:(*versioning.codec)(0xc007288280), memAllocator:(*runtime.Allocator)(0xc00619c498)})
	I1004 00:47:14.751424       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.752258       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.765847       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.765928       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.785979       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.786063       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.798789       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.800843       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.807295       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.807832       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.808366       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.808425       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.843853       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.843933       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1004 00:47:14.845288       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1004 00:47:14.846121       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1004 00:47:15.808108       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1004 00:47:15.845509       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1004 00:47:15.866830       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1004 00:48:38.513386       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.102.17.34"}
	
	* 
	* ==> kube-controller-manager [7a2b494f50ebbf968154b96c34c0de6c9da5c1f9bca12f49edab705b7f5309a5] <==
	* E1004 00:47:48.923475       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:47:55.286006       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:47:55.286114       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:47:57.629453       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:47:57.629575       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:48:03.359915       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:48:03.359973       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:48:22.447576       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:48:22.447603       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:48:28.554444       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:48:28.554718       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1004 00:48:31.438490       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:48:31.438656       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1004 00:48:38.246137       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1004 00:48:38.295120       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-ks4w6"
	I1004 00:48:38.305422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="59.55312ms"
	I1004 00:48:38.344040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.5154ms"
	I1004 00:48:38.344282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="101.237µs"
	I1004 00:48:41.348479       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1004 00:48:41.353052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="9.933µs"
	I1004 00:48:41.360467       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1004 00:48:41.510526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="11.111105ms"
	I1004 00:48:41.513705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="37.49µs"
	W1004 00:48:45.961861       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1004 00:48:45.961953       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [5227c31def0c87db305bbf52f623d6d358ec4d417e40eec7584735381059aedb] <==
	* I1004 00:44:56.133120       1 server_others.go:69] "Using iptables proxy"
	I1004 00:44:56.490524       1 node.go:141] Successfully retrieved node IP: 192.168.39.89
	I1004 00:44:57.162568       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 00:44:57.162616       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 00:44:57.295812       1 server_others.go:152] "Using iptables Proxier"
	I1004 00:44:57.295917       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 00:44:57.296199       1 server.go:846] "Version info" version="v1.28.2"
	I1004 00:44:57.296251       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 00:44:57.479442       1 config.go:188] "Starting service config controller"
	I1004 00:44:57.479536       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 00:44:57.479564       1 config.go:97] "Starting endpoint slice config controller"
	I1004 00:44:57.479567       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 00:44:57.480233       1 config.go:315] "Starting node config controller"
	I1004 00:44:57.480285       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 00:44:57.587946       1 shared_informer.go:318] Caches are synced for node config
	I1004 00:44:57.588200       1 shared_informer.go:318] Caches are synced for service config
	I1004 00:44:57.588491       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [9ab4cfbff345c5eb6e58abe0f10a09d802d1daba45d055e7d0b36b7263c03357] <==
	* W1004 00:44:23.167379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:44:23.167393       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:44:23.167444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 00:44:23.167457       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 00:44:23.168358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:44:23.168403       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:44:24.015132       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:44:24.015230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 00:44:24.068637       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:44:24.068815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 00:44:24.076401       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 00:44:24.076551       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 00:44:24.141934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:44:24.142051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 00:44:24.270931       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:44:24.271052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 00:44:24.324428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:44:24.324544       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 00:44:24.332201       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:44:24.332465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 00:44:24.351892       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 00:44:24.352011       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 00:44:24.383880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 00:44:24.383946       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1004 00:44:27.046056       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:43:54 UTC, ends at Wed 2023-10-04 00:48:49 UTC. --
	Oct 04 00:48:38 addons-718830 kubelet[1244]: I1004 00:48:38.309959    1244 memory_manager.go:346] "RemoveStaleState removing state" podUID="7b92246f-38d7-4081-9132-aeef9ce05146" containerName="volume-snapshot-controller"
	Oct 04 00:48:38 addons-718830 kubelet[1244]: I1004 00:48:38.309965    1244 memory_manager.go:346] "RemoveStaleState removing state" podUID="fde77a86-6f3e-4eda-9ecc-09b40109d27f" containerName="liveness-probe"
	Oct 04 00:48:38 addons-718830 kubelet[1244]: I1004 00:48:38.374593    1244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ff13548a-4d91-4f00-b715-ea3dc86e1a07-gcp-creds\") pod \"hello-world-app-5d77478584-ks4w6\" (UID: \"ff13548a-4d91-4f00-b715-ea3dc86e1a07\") " pod="default/hello-world-app-5d77478584-ks4w6"
	Oct 04 00:48:38 addons-718830 kubelet[1244]: I1004 00:48:38.374677    1244 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g9q5\" (UniqueName: \"kubernetes.io/projected/ff13548a-4d91-4f00-b715-ea3dc86e1a07-kube-api-access-5g9q5\") pod \"hello-world-app-5d77478584-ks4w6\" (UID: \"ff13548a-4d91-4f00-b715-ea3dc86e1a07\") " pod="default/hello-world-app-5d77478584-ks4w6"
	Oct 04 00:48:39 addons-718830 kubelet[1244]: I1004 00:48:39.786555    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6k5\" (UniqueName: \"kubernetes.io/projected/1fffa363-5ec1-4287-b7ad-422e2515c6f9-kube-api-access-9z6k5\") pod \"1fffa363-5ec1-4287-b7ad-422e2515c6f9\" (UID: \"1fffa363-5ec1-4287-b7ad-422e2515c6f9\") "
	Oct 04 00:48:39 addons-718830 kubelet[1244]: I1004 00:48:39.792162    1244 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fffa363-5ec1-4287-b7ad-422e2515c6f9-kube-api-access-9z6k5" (OuterVolumeSpecName: "kube-api-access-9z6k5") pod "1fffa363-5ec1-4287-b7ad-422e2515c6f9" (UID: "1fffa363-5ec1-4287-b7ad-422e2515c6f9"). InnerVolumeSpecName "kube-api-access-9z6k5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 00:48:39 addons-718830 kubelet[1244]: I1004 00:48:39.887873    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9z6k5\" (UniqueName: \"kubernetes.io/projected/1fffa363-5ec1-4287-b7ad-422e2515c6f9-kube-api-access-9z6k5\") on node \"addons-718830\" DevicePath \"\""
	Oct 04 00:48:40 addons-718830 kubelet[1244]: I1004 00:48:40.464568    1244 scope.go:117] "RemoveContainer" containerID="237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf"
	Oct 04 00:48:40 addons-718830 kubelet[1244]: I1004 00:48:40.505638    1244 scope.go:117] "RemoveContainer" containerID="237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf"
	Oct 04 00:48:40 addons-718830 kubelet[1244]: E1004 00:48:40.507247    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf\": container with ID starting with 237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf not found: ID does not exist" containerID="237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf"
	Oct 04 00:48:40 addons-718830 kubelet[1244]: I1004 00:48:40.507291    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf"} err="failed to get container status \"237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf\": rpc error: code = NotFound desc = could not find container \"237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf\": container with ID starting with 237d3c03046a2c323bbf04a41d956fae589679de6de564ec38183e4770d8aaaf not found: ID does not exist"
	Oct 04 00:48:40 addons-718830 kubelet[1244]: I1004 00:48:40.792062    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1fffa363-5ec1-4287-b7ad-422e2515c6f9" path="/var/lib/kubelet/pods/1fffa363-5ec1-4287-b7ad-422e2515c6f9/volumes"
	Oct 04 00:48:42 addons-718830 kubelet[1244]: I1004 00:48:42.774037    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="52dbd0d4-6ba3-4471-bb32-41f07e7582e6" path="/var/lib/kubelet/pods/52dbd0d4-6ba3-4471-bb32-41f07e7582e6/volumes"
	Oct 04 00:48:42 addons-718830 kubelet[1244]: I1004 00:48:42.774877    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52" path="/var/lib/kubelet/pods/cb8dc9f0-1e8b-4b0f-b505-182ff24e3d52/volumes"
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.724097    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krcs9\" (UniqueName: \"kubernetes.io/projected/39923767-8d0d-4214-bd6c-39d53d196a9a-kube-api-access-krcs9\") pod \"39923767-8d0d-4214-bd6c-39d53d196a9a\" (UID: \"39923767-8d0d-4214-bd6c-39d53d196a9a\") "
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.724184    1244 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39923767-8d0d-4214-bd6c-39d53d196a9a-webhook-cert\") pod \"39923767-8d0d-4214-bd6c-39d53d196a9a\" (UID: \"39923767-8d0d-4214-bd6c-39d53d196a9a\") "
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.728529    1244 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39923767-8d0d-4214-bd6c-39d53d196a9a-kube-api-access-krcs9" (OuterVolumeSpecName: "kube-api-access-krcs9") pod "39923767-8d0d-4214-bd6c-39d53d196a9a" (UID: "39923767-8d0d-4214-bd6c-39d53d196a9a"). InnerVolumeSpecName "kube-api-access-krcs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.729438    1244 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39923767-8d0d-4214-bd6c-39d53d196a9a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "39923767-8d0d-4214-bd6c-39d53d196a9a" (UID: "39923767-8d0d-4214-bd6c-39d53d196a9a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.771970    1244 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="39923767-8d0d-4214-bd6c-39d53d196a9a" path="/var/lib/kubelet/pods/39923767-8d0d-4214-bd6c-39d53d196a9a/volumes"
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.824715    1244 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39923767-8d0d-4214-bd6c-39d53d196a9a-webhook-cert\") on node \"addons-718830\" DevicePath \"\""
	Oct 04 00:48:44 addons-718830 kubelet[1244]: I1004 00:48:44.824783    1244 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-krcs9\" (UniqueName: \"kubernetes.io/projected/39923767-8d0d-4214-bd6c-39d53d196a9a-kube-api-access-krcs9\") on node \"addons-718830\" DevicePath \"\""
	Oct 04 00:48:45 addons-718830 kubelet[1244]: I1004 00:48:45.500590    1244 scope.go:117] "RemoveContainer" containerID="41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7"
	Oct 04 00:48:45 addons-718830 kubelet[1244]: I1004 00:48:45.520249    1244 scope.go:117] "RemoveContainer" containerID="41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7"
	Oct 04 00:48:45 addons-718830 kubelet[1244]: E1004 00:48:45.520872    1244 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7\": container with ID starting with 41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7 not found: ID does not exist" containerID="41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7"
	Oct 04 00:48:45 addons-718830 kubelet[1244]: I1004 00:48:45.520938    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7"} err="failed to get container status \"41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7\": rpc error: code = NotFound desc = could not find container \"41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7\": container with ID starting with 41f39461053c919ec4257a982328e152a5c3887cfbf0cb6658fb841524c5a1c7 not found: ID does not exist"
	
	* 
	* ==> storage-provisioner [599c41fdac85705de3a48319e58074ea04f712822383bbc67430993d970c07fe] <==
	* I1004 00:45:01.534158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:45:01.673794       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:45:01.673933       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:45:01.721473       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:45:01.721728       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-718830_9d2765fa-62a3-4180-b078-c3578db559bf!
	I1004 00:45:01.737122       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aab51b0c-6dfb-4e44-80ee-be41be520627", APIVersion:"v1", ResourceVersion:"815", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-718830_9d2765fa-62a3-4180-b078-c3578db559bf became leader
	I1004 00:45:01.822896       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-718830_9d2765fa-62a3-4180-b078-c3578db559bf!
	E1004 00:46:38.563225       1 controller.go:1050] claim "48c55315-2a94-4604-a9bc-b609ad992d89" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-718830 -n addons-718830
helpers_test.go:261: (dbg) Run:  kubectl --context addons-718830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (159.06s)
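Note: the post-mortem collection above can be reproduced by hand against the same profile; a minimal sketch (assuming the addons-718830 profile from this run still exists and the locally built out/minikube-linux-amd64 binary):

	# API server state for the profile (same command as helpers_test.go:254 above).
	out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-718830 -n addons-718830
	# Pods not in Running phase (same selector as helpers_test.go:261 above).
	kubectl --context addons-718830 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# Tail of the minikube logs for the profile.
	out/minikube-linux-amd64 -p addons-718830 logs -n 25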

                                                
                                    
TestAddons/StoppedEnableDisable (155.31s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-718830
addons_test.go:150: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-718830: exit status 82 (2m1.32294603s)

                                                
                                                
-- stdout --
	* Stopping node "addons-718830"  ...
	* Stopping node "addons-718830"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:152: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-718830" : exit status 82
addons_test.go:154: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-718830
addons_test.go:154: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-718830: exit status 11 (21.700786733s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.89:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:156: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-718830" : exit status 11
addons_test.go:158: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-718830
addons_test.go:158: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-718830: exit status 11 (6.143198367s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.89:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:160: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-718830" : exit status 11
addons_test.go:163: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-718830
addons_test.go:163: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-718830: exit status 11 (6.146591631s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.89:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:165: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-718830" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.31s)
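Note: the failure above reduces to four CLI invocations; a minimal sketch for replaying them by hand (assuming the same addons-718830 profile and the locally built out/minikube-linux-amd64 binary):

	# Stop the VM; in this run the stop timed out (GUEST_STOP_TIMEOUT, exit status 82).
	out/minikube-linux-amd64 stop -p addons-718830
	# Addon operations against the profile; each failed here with exit status 11 because
	# the paused-state check could not reach the VM over SSH
	# (dial tcp 192.168.39.89:22: connect: no route to host).
	out/minikube-linux-amd64 addons enable dashboard -p addons-718830
	out/minikube-linux-amd64 addons disable dashboard -p addons-718830
	out/minikube-linux-amd64 addons disable gvisor -p addons-718830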

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.231850186s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-398727
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr: (8.926771936s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image ls: (2.253644704s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-398727" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (12.43s)
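Note: the image round-trip above can be replayed by hand; a minimal sketch using the same commands recorded in this entry (assumes the functional-398727 profile and a local Docker daemon):

	# Pull and retag the test image locally.
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-398727
	# Load the tagged image from the local daemon into the cluster runtime.
	out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
	# The check that failed: the tag should appear in the cluster's image list, but did not in this run.
	out/minikube-linux-amd64 -p functional-398727 image ls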

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (178.22s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-533597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-533597 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.272916908s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-533597 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-533597 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e2db4a68-cb2d-46e7-a035-b1952849bb0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e2db4a68-cb2d-46e7-a035-b1952849bb0a] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.031236815s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1004 00:58:49.040561  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:00:33.291308  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.296589  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.306843  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.327108  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.367390  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.447709  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.608124  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:33.928715  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:34.569690  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:35.850192  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:38.410999  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:43.531407  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:00:53.772606  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
addons_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-533597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.571980421s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:256: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-533597 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.57
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons disable ingress-dns --alsologtostderr -v=1: (8.912969204s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons disable ingress --alsologtostderr -v=1
E1004 01:01:05.195129  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons disable ingress --alsologtostderr -v=1: (7.581925486s)
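Note: the check that actually failed in this entry is the in-VM curl through ssh; a minimal sketch for re-running it by hand (assuming the ingress-addon-legacy-533597 profile from this run):

	# Same command as addons_test.go:240 above; here it exited with status 1 after the
	# remote command returned status 28 (curl's timeout code).
	out/minikube-linux-amd64 -p ingress-addon-legacy-533597 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# DNS check used afterwards against the node IP reported by the `ip` command above.
	nslookup hello-john.test 192.168.39.57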
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-533597 -n ingress-addon-legacy-533597
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533597 logs -n 25: (1.146584949s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-398727 ssh findmnt        | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| service        | functional-398727 service            | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| ssh            | functional-398727 ssh findmnt        | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| start          | -p functional-398727                 | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=kvm2                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| mount          | -p functional-398727                 | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | -p functional-398727                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| start          | -p functional-398727                 | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| update-context | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-398727 ssh pgrep          | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-398727 image build -t     | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | localhost/my-image:functional-398727 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-398727                    | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-398727 image ls           | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	| delete         | -p functional-398727                 | functional-398727           | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:56 UTC |
	| start          | -p ingress-addon-legacy-533597       | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 00:56 UTC | 04 Oct 23 00:57 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-533597          | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 00:57 UTC | 04 Oct 23 00:58 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-533597          | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 00:58 UTC | 04 Oct 23 00:58 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-533597          | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 00:58 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-533597 ip       | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 01:00 UTC | 04 Oct 23 01:00 UTC |
	| addons         | ingress-addon-legacy-533597          | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 01:00 UTC | 04 Oct 23 01:01 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-533597          | ingress-addon-legacy-533597 | jenkins | v1.31.2 | 04 Oct 23 01:01 UTC | 04 Oct 23 01:01 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 00:56:39
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 00:56:39.528697  143960 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:56:39.528956  143960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:39.528966  143960 out.go:309] Setting ErrFile to fd 2...
	I1004 00:56:39.528970  143960 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:39.529197  143960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 00:56:39.529811  143960 out.go:303] Setting JSON to false
	I1004 00:56:39.530695  143960 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5951,"bootTime":1696375049,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:56:39.530753  143960 start.go:138] virtualization: kvm guest
	I1004 00:56:39.533028  143960 out.go:177] * [ingress-addon-legacy-533597] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 00:56:39.534386  143960 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 00:56:39.535686  143960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:56:39.534485  143960 notify.go:220] Checking for updates...
	I1004 00:56:39.538327  143960 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:56:39.539660  143960 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:56:39.541099  143960 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 00:56:39.542612  143960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 00:56:39.544194  143960 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 00:56:39.585656  143960 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 00:56:39.587253  143960 start.go:298] selected driver: kvm2
	I1004 00:56:39.587271  143960 start.go:902] validating driver "kvm2" against <nil>
	I1004 00:56:39.587288  143960 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 00:56:39.587916  143960 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:56:39.588003  143960 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 00:56:39.604008  143960 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 00:56:39.604088  143960 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 00:56:39.604266  143960 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 00:56:39.604302  143960 cni.go:84] Creating CNI manager for ""
	I1004 00:56:39.604313  143960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:56:39.604321  143960 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 00:56:39.604330  143960 start_flags.go:321] config:
	{Name:ingress-addon-legacy-533597 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-533597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:56:39.604835  143960 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:56:39.606752  143960 out.go:177] * Starting control plane node ingress-addon-legacy-533597 in cluster ingress-addon-legacy-533597
	I1004 00:56:39.608078  143960 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1004 00:56:39.634772  143960 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1004 00:56:39.634812  143960 cache.go:57] Caching tarball of preloaded images
	I1004 00:56:39.634944  143960 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1004 00:56:39.636871  143960 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1004 00:56:39.638286  143960 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:56:39.673495  143960 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1004 00:56:43.010202  143960 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:56:43.010292  143960 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:56:43.999593  143960 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
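	
	Should the preload ever need to be checked by hand, the tarball URL and its md5 are in the download line above; a minimal sketch, assuming curl and md5sum are available on the agent:
	
	  curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	  md5sum preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4   # expected per the checksum query above: 0d02e096853189c5b37812b400898e14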
	I1004 00:56:43.999930  143960 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/config.json ...
	I1004 00:56:43.999962  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/config.json: {Name:mk44d97871fcf7522ea3fb4b1fe978719b312121 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:56:44.000161  143960 start.go:365] acquiring machines lock for ingress-addon-legacy-533597: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 00:56:44.000199  143960 start.go:369] acquired machines lock for "ingress-addon-legacy-533597" in 20.925µs
	I1004 00:56:44.000219  143960 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-533597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-533597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 00:56:44.000281  143960 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 00:56:44.003428  143960 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1004 00:56:44.003619  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:56:44.003672  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:56:44.018257  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I1004 00:56:44.018709  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:56:44.019272  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:56:44.019301  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:56:44.019653  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:56:44.019894  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetMachineName
	I1004 00:56:44.020091  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:56:44.020253  143960 start.go:159] libmachine.API.Create for "ingress-addon-legacy-533597" (driver="kvm2")
	I1004 00:56:44.020279  143960 client.go:168] LocalClient.Create starting
	I1004 00:56:44.020315  143960 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 00:56:44.020355  143960 main.go:141] libmachine: Decoding PEM data...
	I1004 00:56:44.020370  143960 main.go:141] libmachine: Parsing certificate...
	I1004 00:56:44.020426  143960 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 00:56:44.020447  143960 main.go:141] libmachine: Decoding PEM data...
	I1004 00:56:44.020456  143960 main.go:141] libmachine: Parsing certificate...
	I1004 00:56:44.020473  143960 main.go:141] libmachine: Running pre-create checks...
	I1004 00:56:44.020493  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .PreCreateCheck
	I1004 00:56:44.020793  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetConfigRaw
	I1004 00:56:44.021188  143960 main.go:141] libmachine: Creating machine...
	I1004 00:56:44.021203  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Create
	I1004 00:56:44.021322  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Creating KVM machine...
	I1004 00:56:44.022750  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found existing default KVM network
	I1004 00:56:44.023435  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:44.023303  143997 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015b40}
	I1004 00:56:44.028597  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | trying to create private KVM network mk-ingress-addon-legacy-533597 192.168.39.0/24...
	I1004 00:56:44.097031  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597 ...
	I1004 00:56:44.097078  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 00:56:44.097092  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | private KVM network mk-ingress-addon-legacy-533597 192.168.39.0/24 created
	I1004 00:56:44.097117  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:44.096940  143997 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:56:44.097136  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 00:56:44.322551  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:44.322426  143997 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa...
	I1004 00:56:44.429694  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:44.429547  143997 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/ingress-addon-legacy-533597.rawdisk...
	I1004 00:56:44.429726  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Writing magic tar header
	I1004 00:56:44.429741  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Writing SSH key tar header
	I1004 00:56:44.429802  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:44.429751  143997 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597 ...
	I1004 00:56:44.429938  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597
	I1004 00:56:44.429962  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 00:56:44.429980  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597 (perms=drwx------)
	I1004 00:56:44.430002  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 00:56:44.430010  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 00:56:44.430019  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 00:56:44.430030  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 00:56:44.430046  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:56:44.430066  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 00:56:44.430082  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 00:56:44.430097  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home/jenkins
	I1004 00:56:44.430112  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 00:56:44.430120  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Checking permissions on dir: /home
	I1004 00:56:44.430133  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Skipping /home - not owner
	I1004 00:56:44.430152  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Creating domain...
	I1004 00:56:44.431180  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) define libvirt domain using xml: 
	I1004 00:56:44.431200  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) <domain type='kvm'>
	I1004 00:56:44.431208  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <name>ingress-addon-legacy-533597</name>
	I1004 00:56:44.431215  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <memory unit='MiB'>4096</memory>
	I1004 00:56:44.431222  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <vcpu>2</vcpu>
	I1004 00:56:44.431227  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <features>
	I1004 00:56:44.431236  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <acpi/>
	I1004 00:56:44.431241  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <apic/>
	I1004 00:56:44.431247  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <pae/>
	I1004 00:56:44.431253  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     
	I1004 00:56:44.431260  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   </features>
	I1004 00:56:44.431267  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <cpu mode='host-passthrough'>
	I1004 00:56:44.431273  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   
	I1004 00:56:44.431280  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   </cpu>
	I1004 00:56:44.431294  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <os>
	I1004 00:56:44.431311  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <type>hvm</type>
	I1004 00:56:44.431325  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <boot dev='cdrom'/>
	I1004 00:56:44.431340  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <boot dev='hd'/>
	I1004 00:56:44.431354  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <bootmenu enable='no'/>
	I1004 00:56:44.431363  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   </os>
	I1004 00:56:44.431370  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   <devices>
	I1004 00:56:44.431379  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <disk type='file' device='cdrom'>
	I1004 00:56:44.431393  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/boot2docker.iso'/>
	I1004 00:56:44.431406  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <target dev='hdc' bus='scsi'/>
	I1004 00:56:44.431421  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <readonly/>
	I1004 00:56:44.431437  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </disk>
	I1004 00:56:44.431453  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <disk type='file' device='disk'>
	I1004 00:56:44.431465  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 00:56:44.431480  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/ingress-addon-legacy-533597.rawdisk'/>
	I1004 00:56:44.431488  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <target dev='hda' bus='virtio'/>
	I1004 00:56:44.431501  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </disk>
	I1004 00:56:44.431517  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <interface type='network'>
	I1004 00:56:44.431535  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <source network='mk-ingress-addon-legacy-533597'/>
	I1004 00:56:44.431549  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <model type='virtio'/>
	I1004 00:56:44.431558  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </interface>
	I1004 00:56:44.431564  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <interface type='network'>
	I1004 00:56:44.431572  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <source network='default'/>
	I1004 00:56:44.431579  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <model type='virtio'/>
	I1004 00:56:44.431587  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </interface>
	I1004 00:56:44.431595  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <serial type='pty'>
	I1004 00:56:44.431603  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <target port='0'/>
	I1004 00:56:44.431609  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </serial>
	I1004 00:56:44.431617  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <console type='pty'>
	I1004 00:56:44.431644  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <target type='serial' port='0'/>
	I1004 00:56:44.431664  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </console>
	I1004 00:56:44.431674  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     <rng model='virtio'>
	I1004 00:56:44.431689  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)       <backend model='random'>/dev/random</backend>
	I1004 00:56:44.431699  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     </rng>
	I1004 00:56:44.431705  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     
	I1004 00:56:44.431714  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)     
	I1004 00:56:44.431722  143960 main.go:141] libmachine: (ingress-addon-legacy-533597)   </devices>
	I1004 00:56:44.431729  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) </domain>
	I1004 00:56:44.431737  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) 
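	
	The domain definition logged above can be cross-checked on the Jenkins host with libvirt's own tooling; a minimal sketch using the domain and network names taken from this log (virsh with access to qemu:///system is assumed):
	
	  virsh -c qemu:///system dumpxml ingress-addon-legacy-533597              # XML libvirt actually stored for the VM
	  virsh -c qemu:///system net-dhcp-leases mk-ingress-addon-legacy-533597   # leases handed out while the "Waiting to get IP" loop below retries
	  virsh -c qemu:///system domifaddr ingress-addon-legacy-533597            # interface address once DHCP has assigned one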
	I1004 00:56:44.436256  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:8c:87:95 in network default
	I1004 00:56:44.436839  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Ensuring networks are active...
	I1004 00:56:44.436857  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:44.437544  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Ensuring network default is active
	I1004 00:56:44.437943  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Ensuring network mk-ingress-addon-legacy-533597 is active
	I1004 00:56:44.438522  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Getting domain xml...
	I1004 00:56:44.439267  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Creating domain...
	I1004 00:56:45.687314  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Waiting to get IP...
	I1004 00:56:45.688190  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:45.688667  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:45.688695  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:45.688646  143997 retry.go:31] will retry after 201.492215ms: waiting for machine to come up
	I1004 00:56:45.892222  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:45.892694  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:45.892733  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:45.892630  143997 retry.go:31] will retry after 358.253717ms: waiting for machine to come up
	I1004 00:56:46.252093  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.252532  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.252567  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:46.252462  143997 retry.go:31] will retry after 340.029482ms: waiting for machine to come up
	I1004 00:56:46.593664  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.594144  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.594182  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:46.594087  143997 retry.go:31] will retry after 389.674179ms: waiting for machine to come up
	I1004 00:56:46.985791  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.986284  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:46.986317  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:46.986248  143997 retry.go:31] will retry after 588.965953ms: waiting for machine to come up
	I1004 00:56:47.576927  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:47.577357  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:47.577391  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:47.577295  143997 retry.go:31] will retry after 864.491968ms: waiting for machine to come up
	I1004 00:56:48.443489  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:48.443857  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:48.443881  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:48.443818  143997 retry.go:31] will retry after 998.219008ms: waiting for machine to come up
	I1004 00:56:49.443242  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:49.443613  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:49.443646  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:49.443565  143997 retry.go:31] will retry after 961.126372ms: waiting for machine to come up
	I1004 00:56:50.406717  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:50.407064  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:50.407098  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:50.407008  143997 retry.go:31] will retry after 1.401211489s: waiting for machine to come up
	I1004 00:56:51.810559  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:51.810864  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:51.810904  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:51.810813  143997 retry.go:31] will retry after 1.457400213s: waiting for machine to come up
	I1004 00:56:53.270668  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:53.271027  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:53.271052  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:53.270994  143997 retry.go:31] will retry after 2.628641695s: waiting for machine to come up
	I1004 00:56:55.900945  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:55.901385  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:55.901424  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:55.901321  143997 retry.go:31] will retry after 2.365237051s: waiting for machine to come up
	I1004 00:56:58.270038  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:56:58.270414  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:56:58.270436  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:56:58.270373  143997 retry.go:31] will retry after 4.475365682s: waiting for machine to come up
	I1004 00:57:02.750041  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:02.750534  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find current IP address of domain ingress-addon-legacy-533597 in network mk-ingress-addon-legacy-533597
	I1004 00:57:02.750570  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | I1004 00:57:02.750474  143997 retry.go:31] will retry after 4.268375873s: waiting for machine to come up
	I1004 00:57:07.023422  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:07.023907  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Found IP for machine: 192.168.39.57
	I1004 00:57:07.023936  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Reserving static IP address...
	I1004 00:57:07.023954  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has current primary IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:07.024241  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-533597", mac: "52:54:00:d6:92:33", ip: "192.168.39.57"} in network mk-ingress-addon-legacy-533597
	I1004 00:57:07.096213  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Getting to WaitForSSH function...
	I1004 00:57:07.096246  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Reserved static IP address: 192.168.39.57
	I1004 00:57:07.096260  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Waiting for SSH to be available...
	I1004 00:57:07.098968  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:07.099294  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597
	I1004 00:57:07.099328  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | unable to find defined IP address of network mk-ingress-addon-legacy-533597 interface with MAC address 52:54:00:d6:92:33
	I1004 00:57:07.099431  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Using SSH client type: external
	I1004 00:57:07.099463  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa (-rw-------)
	I1004 00:57:07.099496  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 00:57:07.099510  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | About to run SSH command:
	I1004 00:57:07.099519  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | exit 0
	I1004 00:57:07.103549  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | SSH cmd err, output: exit status 255: 
	I1004 00:57:07.103568  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 00:57:07.103577  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | command : exit 0
	I1004 00:57:07.103583  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | err     : exit status 255
	I1004 00:57:07.103591  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | output  : 
	I1004 00:57:10.104183  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Getting to WaitForSSH function...
	I1004 00:57:10.106651  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.107055  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.107096  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.107157  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Using SSH client type: external
	I1004 00:57:10.107179  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa (-rw-------)
	I1004 00:57:10.107231  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 00:57:10.107257  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | About to run SSH command:
	I1004 00:57:10.107267  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | exit 0
	I1004 00:57:10.206414  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | SSH cmd err, output: <nil>: 
	I1004 00:57:10.206648  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) KVM machine creation complete!
	I1004 00:57:10.207024  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetConfigRaw
	I1004 00:57:10.213170  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:10.213419  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:10.213582  143960 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 00:57:10.213603  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetState
	I1004 00:57:10.215096  143960 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 00:57:10.215114  143960 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 00:57:10.215120  143960 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 00:57:10.215127  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:10.217159  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.217562  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.217600  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.217688  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:10.217858  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.218010  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.218130  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:10.218336  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:10.218677  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:10.218688  143960 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 00:57:10.349412  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 00:57:10.349439  143960 main.go:141] libmachine: Detecting the provisioner...
	I1004 00:57:10.349448  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:10.352328  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.352636  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.352674  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.352830  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:10.353049  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.353212  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.353441  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:10.353678  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:10.354039  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:10.354052  143960 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 00:57:10.487128  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 00:57:10.487224  143960 main.go:141] libmachine: found compatible host: buildroot
	I1004 00:57:10.487233  143960 main.go:141] libmachine: Provisioning with buildroot...
	I1004 00:57:10.487242  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetMachineName
	I1004 00:57:10.487609  143960 buildroot.go:166] provisioning hostname "ingress-addon-legacy-533597"
	I1004 00:57:10.487638  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetMachineName
	I1004 00:57:10.487804  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:10.490550  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.490928  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.490960  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.491088  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:10.491312  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.491498  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.491685  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:10.491873  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:10.492196  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:10.492217  143960 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-533597 && echo "ingress-addon-legacy-533597" | sudo tee /etc/hostname
	I1004 00:57:10.634541  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-533597
	
	I1004 00:57:10.634571  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:10.637221  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.637518  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.637546  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.637667  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:10.637897  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.638082  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:10.638214  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:10.638363  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:10.638690  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:10.638711  143960 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-533597' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-533597/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-533597' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 00:57:10.779183  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 00:57:10.779228  143960 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 00:57:10.779255  143960 buildroot.go:174] setting up certificates
	I1004 00:57:10.779269  143960 provision.go:83] configureAuth start
	I1004 00:57:10.779285  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetMachineName
	I1004 00:57:10.779554  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetIP
	I1004 00:57:10.782442  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.782740  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.782779  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.782959  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:10.785306  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.785643  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:10.785683  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:10.785826  143960 provision.go:138] copyHostCerts
	I1004 00:57:10.785879  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 00:57:10.785921  143960 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 00:57:10.785934  143960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 00:57:10.786012  143960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 00:57:10.786111  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 00:57:10.786139  143960 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 00:57:10.786150  143960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 00:57:10.786191  143960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 00:57:10.786258  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 00:57:10.786281  143960 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 00:57:10.786291  143960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 00:57:10.786328  143960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 00:57:10.786394  143960 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-533597 san=[192.168.39.57 192.168.39.57 localhost 127.0.0.1 minikube ingress-addon-legacy-533597]
	I1004 00:57:11.064153  143960 provision.go:172] copyRemoteCerts
	I1004 00:57:11.064221  143960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 00:57:11.064255  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.067139  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.067471  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.067508  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.067710  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.067969  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.068138  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.068283  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
	I1004 00:57:11.163457  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 00:57:11.163538  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 00:57:11.187671  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 00:57:11.187740  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1004 00:57:11.212765  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 00:57:11.212838  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 00:57:11.235769  143960 provision.go:86] duration metric: configureAuth took 456.482446ms
	I1004 00:57:11.235807  143960 buildroot.go:189] setting minikube options for container-runtime
	I1004 00:57:11.235993  143960 config.go:182] Loaded profile config "ingress-addon-legacy-533597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1004 00:57:11.236070  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.238999  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.239451  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.239495  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.239673  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.239907  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.240108  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.240272  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.240441  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:11.240889  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:11.240916  143960 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 00:57:11.561509  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 00:57:11.561542  143960 main.go:141] libmachine: Checking connection to Docker...
	I1004 00:57:11.561552  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetURL
	I1004 00:57:11.562983  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Using libvirt version 6000000
	I1004 00:57:11.565402  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.565814  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.565867  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.565990  143960 main.go:141] libmachine: Docker is up and running!
	I1004 00:57:11.566005  143960 main.go:141] libmachine: Reticulating splines...
	I1004 00:57:11.566013  143960 client.go:171] LocalClient.Create took 27.545726068s
	I1004 00:57:11.566038  143960 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-533597" took 27.545786085s
	I1004 00:57:11.566050  143960 start.go:300] post-start starting for "ingress-addon-legacy-533597" (driver="kvm2")
	I1004 00:57:11.566062  143960 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 00:57:11.566086  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:11.566343  143960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 00:57:11.566373  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.568702  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.569039  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.569071  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.569249  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.569449  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.569614  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.569749  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
	I1004 00:57:11.663989  143960 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 00:57:11.668215  143960 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 00:57:11.668238  143960 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 00:57:11.668303  143960 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 00:57:11.668398  143960 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 00:57:11.668413  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 00:57:11.668547  143960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 00:57:11.678497  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 00:57:11.700649  143960 start.go:303] post-start completed in 134.582784ms
	I1004 00:57:11.700707  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetConfigRaw
	I1004 00:57:11.701442  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetIP
	I1004 00:57:11.704231  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.704557  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.704594  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.704889  143960 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/config.json ...
	I1004 00:57:11.705111  143960 start.go:128] duration metric: createHost completed in 27.704820661s
	I1004 00:57:11.705145  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.707574  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.707925  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.707966  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.708104  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.708296  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.708462  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.708562  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.708663  143960 main.go:141] libmachine: Using SSH client type: native
	I1004 00:57:11.708982  143960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I1004 00:57:11.708994  143960 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 00:57:11.838762  143960 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696381031.824409104
	
	I1004 00:57:11.838783  143960 fix.go:206] guest clock: 1696381031.824409104
	I1004 00:57:11.838790  143960 fix.go:219] Guest: 2023-10-04 00:57:11.824409104 +0000 UTC Remote: 2023-10-04 00:57:11.705122794 +0000 UTC m=+32.205709896 (delta=119.28631ms)
	I1004 00:57:11.838848  143960 fix.go:190] guest clock delta is within tolerance: 119.28631ms
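	(Note: the fix.go lines above compare the guest clock against the host and accept the ~119ms drift. Below is a small, hypothetical Go sketch of that comparison; the 2s tolerance is an assumption for illustration and is not taken from minikube's source.)

	// Hedged sketch: decide whether a guest/host clock delta needs correcting.
	package main

	import (
		"fmt"
		"time"
	)

	func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(119 * time.Millisecond) // roughly the delta seen in the log above
		if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}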
	I1004 00:57:11.838856  143960 start.go:83] releasing machines lock for "ingress-addon-legacy-533597", held for 27.83864596s
	I1004 00:57:11.838880  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:11.839233  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetIP
	I1004 00:57:11.841712  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.842026  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.842080  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.842225  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:11.842662  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:11.842850  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:11.842943  143960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 00:57:11.842999  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.843048  143960 ssh_runner.go:195] Run: cat /version.json
	I1004 00:57:11.843074  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:11.845413  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.845619  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.845697  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.845728  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.845830  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.846023  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.846070  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:11.846102  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:11.846199  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.846330  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:11.846414  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
	I1004 00:57:11.846472  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:11.846619  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:11.846759  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
	I1004 00:57:11.934544  143960 ssh_runner.go:195] Run: systemctl --version
	I1004 00:57:11.960220  143960 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 00:57:12.119487  143960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 00:57:12.125312  143960 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 00:57:12.125382  143960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 00:57:12.141393  143960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 00:57:12.141421  143960 start.go:469] detecting cgroup driver to use...
	I1004 00:57:12.141489  143960 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 00:57:12.156520  143960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 00:57:12.169333  143960 docker.go:197] disabling cri-docker service (if available) ...
	I1004 00:57:12.169387  143960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 00:57:12.182729  143960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 00:57:12.196210  143960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 00:57:12.310673  143960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 00:57:12.429043  143960 docker.go:213] disabling docker service ...
	I1004 00:57:12.429155  143960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 00:57:12.443130  143960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 00:57:12.455131  143960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 00:57:12.564682  143960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 00:57:12.671124  143960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 00:57:12.684061  143960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 00:57:12.701185  143960 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1004 00:57:12.701260  143960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:57:12.710441  143960 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 00:57:12.710516  143960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:57:12.719642  143960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:57:12.728604  143960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 00:57:12.737863  143960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 00:57:12.747549  143960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 00:57:12.755676  143960 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 00:57:12.755731  143960 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 00:57:12.768331  143960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 00:57:12.777403  143960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 00:57:12.903602  143960 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 00:57:13.086516  143960 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 00:57:13.086601  143960 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 00:57:13.091621  143960 start.go:537] Will wait 60s for crictl version
	I1004 00:57:13.091704  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:13.095935  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 00:57:13.131536  143960 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
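	(Note: start.go:516 above waits up to 60s for /var/run/crio/crio.sock to appear; in the log this is done over SSH with `stat`. The sketch below is a hypothetical local-polling equivalent in Go, not minikube's actual implementation.)

	// Hedged sketch: poll for a socket path with a deadline.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}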
	I1004 00:57:13.131658  143960 ssh_runner.go:195] Run: crio --version
	I1004 00:57:13.178723  143960 ssh_runner.go:195] Run: crio --version
	I1004 00:57:13.224310  143960 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1004 00:57:13.225817  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetIP
	I1004 00:57:13.228763  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:13.229135  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:13.229171  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:13.229346  143960 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 00:57:13.233348  143960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 00:57:13.244803  143960 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1004 00:57:13.244853  143960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 00:57:13.283083  143960 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1004 00:57:13.283154  143960 ssh_runner.go:195] Run: which lz4
	I1004 00:57:13.286784  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 00:57:13.286870  143960 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 00:57:13.291139  143960 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 00:57:13.291173  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1004 00:57:15.246659  143960 crio.go:444] Took 1.959801 seconds to copy over tarball
	I1004 00:57:15.246735  143960 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 00:57:18.516948  143960 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.27017487s)
	I1004 00:57:18.516989  143960 crio.go:451] Took 3.270297 seconds to extract the tarball
	I1004 00:57:18.517002  143960 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 00:57:18.561380  143960 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 00:57:18.618430  143960 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1004 00:57:18.618468  143960 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 00:57:18.618521  143960 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 00:57:18.618570  143960 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1004 00:57:18.618589  143960 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1004 00:57:18.618628  143960 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1004 00:57:18.618533  143960 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1004 00:57:18.618829  143960 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1004 00:57:18.618833  143960 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1004 00:57:18.618862  143960 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1004 00:57:18.619955  143960 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1004 00:57:18.619996  143960 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1004 00:57:18.619963  143960 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1004 00:57:18.619963  143960 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 00:57:18.619965  143960 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1004 00:57:18.619967  143960 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1004 00:57:18.619964  143960 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1004 00:57:18.619969  143960 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1004 00:57:18.776034  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1004 00:57:18.776177  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1004 00:57:18.781496  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1004 00:57:18.786331  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1004 00:57:18.802555  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1004 00:57:18.827231  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1004 00:57:18.870968  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1004 00:57:18.876815  143960 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1004 00:57:18.876838  143960 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1004 00:57:18.876863  143960 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1004 00:57:18.876863  143960 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1004 00:57:18.876906  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.876911  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.928913  143960 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1004 00:57:18.928959  143960 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1004 00:57:18.929008  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.929005  143960 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1004 00:57:18.929044  143960 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1004 00:57:18.929086  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.956352  143960 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1004 00:57:18.956398  143960 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1004 00:57:18.956460  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.958417  143960 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1004 00:57:18.958459  143960 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1004 00:57:18.958506  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.973212  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1004 00:57:18.973290  143960 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1004 00:57:18.973336  143960 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1004 00:57:18.973341  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1004 00:57:18.973375  143960 ssh_runner.go:195] Run: which crictl
	I1004 00:57:18.973379  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1004 00:57:18.973471  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1004 00:57:18.973510  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1004 00:57:18.973547  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1004 00:57:19.094491  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1004 00:57:19.094542  143960 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1004 00:57:19.122528  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1004 00:57:19.122560  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1004 00:57:19.122599  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1004 00:57:19.122667  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1004 00:57:19.122803  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1004 00:57:19.142158  143960 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1004 00:57:19.225863  143960 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 00:57:19.370836  143960 cache_images.go:92] LoadImages completed in 752.345177ms
	W1004 00:57:19.370949  143960 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
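	(Note: the cache_images.go lines above show the pattern behind the warning: each required image is inspected in the runtime, and anything missing is loaded from a local cache tarball, which does not exist here. The Go sketch below illustrates only that fallback decision; `inRuntime` and `loadImage` are hypothetical stand-ins for the podman/crictl calls in the log, not real minikube APIs.)

	// Hedged sketch: skip images already in the runtime, otherwise try the cache.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// inRuntime stands in for `sudo podman image inspect --format {{.Id}} <image>`.
	func inRuntime(image, wantDigest string) bool { return false }

	func loadImage(image, cacheDir string) error {
		tarball := filepath.Join(cacheDir, image)
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("loading cached images: stat %s: %w", tarball, err)
		}
		// The real flow copies the tarball into the VM and loads it into the runtime.
		return nil
	}

	func main() {
		cacheDir := "/home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64"
		images := map[string]string{
			"registry.k8s.io/etcd_3.4.3-0": "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f",
		}
		for img, digest := range images {
			if inRuntime(img, digest) {
				continue
			}
			if err := loadImage(img, cacheDir); err != nil {
				fmt.Println("X Unable to load cached images:", err)
			}
		}
	}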
	I1004 00:57:19.371021  143960 ssh_runner.go:195] Run: crio config
	I1004 00:57:19.432586  143960 cni.go:84] Creating CNI manager for ""
	I1004 00:57:19.432611  143960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:57:19.432631  143960 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 00:57:19.432652  143960 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-533597 NodeName:ingress-addon-legacy-533597 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1004 00:57:19.432840  143960 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-533597"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 00:57:19.432958  143960 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-533597 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-533597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 00:57:19.433035  143960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1004 00:57:19.443129  143960 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 00:57:19.443197  143960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 00:57:19.451872  143960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1004 00:57:19.467236  143960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1004 00:57:19.482610  143960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1004 00:57:19.497953  143960 ssh_runner.go:195] Run: grep 192.168.39.57	control-plane.minikube.internal$ /etc/hosts
	I1004 00:57:19.501640  143960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.57	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 00:57:19.513413  143960 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597 for IP: 192.168.39.57
	I1004 00:57:19.513446  143960 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:19.513622  143960 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 00:57:19.513660  143960 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 00:57:19.513701  143960 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key
	I1004 00:57:19.513715  143960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt with IP's: []
	I1004 00:57:19.596995  143960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt ...
	I1004 00:57:19.597025  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: {Name:mk57ffc0fb4e2ae2c78ac5cd62827aa834931c85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:19.597186  143960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key ...
	I1004 00:57:19.597198  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key: {Name:mk0b641834c0ccf6ae054268372febd760e4c947 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:19.597278  143960 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key.e9c877e8
	I1004 00:57:19.597293  143960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt.e9c877e8 with IP's: [192.168.39.57 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 00:57:19.804821  143960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt.e9c877e8 ...
	I1004 00:57:19.804852  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt.e9c877e8: {Name:mk33bb6ae5a776afd88c33474b01785c112bd1d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:19.805006  143960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key.e9c877e8 ...
	I1004 00:57:19.805018  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key.e9c877e8: {Name:mke0a9c2ab8d1bed2d414658b0d8b8c083d23c6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:19.805083  143960 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt.e9c877e8 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt
	I1004 00:57:19.805167  143960 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key.e9c877e8 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key
	I1004 00:57:19.805231  143960 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.key
	I1004 00:57:19.805246  143960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.crt with IP's: []
	I1004 00:57:20.128583  143960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.crt ...
	I1004 00:57:20.128619  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.crt: {Name:mk13933255936dbf448af45195308cd8b8365630 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:20.128782  143960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.key ...
	I1004 00:57:20.128793  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.key: {Name:mk56d67f55683d3b9f9e0ab8574d9fee1e12156b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:20.128855  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 00:57:20.128875  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 00:57:20.128889  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 00:57:20.128902  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 00:57:20.128918  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 00:57:20.128931  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 00:57:20.128943  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 00:57:20.128955  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 00:57:20.129006  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 00:57:20.129041  143960 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 00:57:20.129049  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 00:57:20.129072  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 00:57:20.129121  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 00:57:20.129154  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 00:57:20.129190  143960 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 00:57:20.129218  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:57:20.129232  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 00:57:20.129243  143960 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 00:57:20.129829  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 00:57:20.154176  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 00:57:20.176927  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 00:57:20.199797  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 00:57:20.223417  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 00:57:20.246258  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 00:57:20.268711  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 00:57:20.291972  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 00:57:20.315474  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 00:57:20.338848  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 00:57:20.361709  143960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 00:57:20.383772  143960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 00:57:20.400226  143960 ssh_runner.go:195] Run: openssl version
	I1004 00:57:20.405687  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 00:57:20.415534  143960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:57:20.420237  143960 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:57:20.420295  143960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 00:57:20.425697  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 00:57:20.435327  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 00:57:20.445492  143960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 00:57:20.450167  143960 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 00:57:20.450233  143960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 00:57:20.455752  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 00:57:20.465210  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 00:57:20.474641  143960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 00:57:20.478845  143960 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 00:57:20.478882  143960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 00:57:20.484208  143960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 00:57:20.493456  143960 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 00:57:20.497227  143960 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 00:57:20.497280  143960 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-533597 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-533597 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:57:20.497363  143960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 00:57:20.497417  143960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 00:57:20.532478  143960 cri.go:89] found id: ""
	I1004 00:57:20.532551  143960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 00:57:20.541625  143960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 00:57:20.550429  143960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 00:57:20.559240  143960 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 00:57:20.559280  143960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1004 00:57:20.618324  143960 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1004 00:57:20.618621  143960 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 00:57:20.755589  143960 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 00:57:20.755726  143960 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 00:57:20.755844  143960 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 00:57:20.979108  143960 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 00:57:20.980106  143960 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 00:57:20.980185  143960 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 00:57:21.108042  143960 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 00:57:21.110317  143960 out.go:204]   - Generating certificates and keys ...
	I1004 00:57:21.110444  143960 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 00:57:21.110539  143960 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 00:57:21.183126  143960 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 00:57:21.290165  143960 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 00:57:21.408679  143960 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 00:57:21.480405  143960 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 00:57:21.536607  143960 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 00:57:21.536841  143960 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-533597 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I1004 00:57:21.601137  143960 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 00:57:21.601448  143960 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-533597 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I1004 00:57:21.707516  143960 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 00:57:22.103475  143960 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 00:57:22.177038  143960 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 00:57:22.177284  143960 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 00:57:22.289035  143960 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 00:57:22.491270  143960 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 00:57:22.866008  143960 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 00:57:23.247104  143960 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 00:57:23.249036  143960 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 00:57:23.252522  143960 out.go:204]   - Booting up control plane ...
	I1004 00:57:23.252658  143960 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 00:57:23.256079  143960 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 00:57:23.258027  143960 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 00:57:23.264330  143960 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 00:57:23.277159  143960 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 00:57:32.280679  143960 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003908 seconds
	I1004 00:57:32.280860  143960 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 00:57:32.296753  143960 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 00:57:32.819690  143960 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 00:57:32.819862  143960 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-533597 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1004 00:57:33.331817  143960 kubeadm.go:322] [bootstrap-token] Using token: qm1wrw.n3kf6wlmmnceumek
	I1004 00:57:33.333285  143960 out.go:204]   - Configuring RBAC rules ...
	I1004 00:57:33.333424  143960 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 00:57:33.340475  143960 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 00:57:33.350620  143960 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 00:57:33.354291  143960 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 00:57:33.357214  143960 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 00:57:33.364351  143960 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 00:57:33.381331  143960 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 00:57:33.664314  143960 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 00:57:33.754749  143960 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 00:57:33.754819  143960 kubeadm.go:322] 
	I1004 00:57:33.754968  143960 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 00:57:33.754983  143960 kubeadm.go:322] 
	I1004 00:57:33.755100  143960 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 00:57:33.755132  143960 kubeadm.go:322] 
	I1004 00:57:33.755175  143960 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 00:57:33.755272  143960 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 00:57:33.755354  143960 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 00:57:33.755365  143960 kubeadm.go:322] 
	I1004 00:57:33.755437  143960 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 00:57:33.755563  143960 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 00:57:33.755659  143960 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 00:57:33.755674  143960 kubeadm.go:322] 
	I1004 00:57:33.755782  143960 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 00:57:33.755898  143960 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 00:57:33.755914  143960 kubeadm.go:322] 
	I1004 00:57:33.756034  143960 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qm1wrw.n3kf6wlmmnceumek \
	I1004 00:57:33.756159  143960 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 00:57:33.756210  143960 kubeadm.go:322]     --control-plane 
	I1004 00:57:33.756217  143960 kubeadm.go:322] 
	I1004 00:57:33.756328  143960 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 00:57:33.756339  143960 kubeadm.go:322] 
	I1004 00:57:33.756434  143960 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qm1wrw.n3kf6wlmmnceumek \
	I1004 00:57:33.756587  143960 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 00:57:33.756805  143960 kubeadm.go:322] W1004 00:57:20.611973     959 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1004 00:57:33.756949  143960 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 00:57:33.757124  143960 kubeadm.go:322] W1004 00:57:23.251228     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1004 00:57:33.757320  143960 kubeadm.go:322] W1004 00:57:23.253097     959 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1004 00:57:33.757349  143960 cni.go:84] Creating CNI manager for ""
	I1004 00:57:33.757360  143960 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:57:33.760014  143960 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 00:57:33.761369  143960 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 00:57:33.772375  143960 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
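The bridge CNI step above creates /etc/cni/net.d and drops a single 457-byte conflist into it. The exact payload is not reproduced in this log, so the JSON in the sketch below is only a generic bridge-plus-portmap configuration, and the Go code shows the generic write-a-conflist pattern rather than minikube's own implementation.

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // Illustrative only: a generic bridge + portmap conflist. The real file
    // copied over above may differ in fields and pod subnet.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }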
	I1004 00:57:33.792856  143960 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 00:57:33.792934  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:33.792934  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=ingress-addon-legacy-533597 minikube.k8s.io/updated_at=2023_10_04T00_57_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:33.974233  143960 ops.go:34] apiserver oom_adj: -16
	I1004 00:57:33.985439  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:34.180928  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:34.836852  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:35.336230  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:35.836462  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:36.336984  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:36.836279  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:37.336705  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:37.836113  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:38.336827  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:38.836969  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:39.336328  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:39.836218  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:40.337077  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:40.837101  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:41.336782  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:41.836505  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:42.336840  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:42.836139  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:43.336335  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:43.836118  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:44.336541  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:44.836657  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:45.336712  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:45.836965  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:46.336262  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:46.836703  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:47.336592  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:47.836864  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:48.337064  143960 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 00:57:48.510349  143960 kubeadm.go:1081] duration metric: took 14.717480325s to wait for elevateKubeSystemPrivileges.
	I1004 00:57:48.510398  143960 kubeadm.go:406] StartCluster complete in 28.013121644s
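The burst of identical `kubectl get sa default` invocations above (roughly every 500ms between 00:57:33 and 00:57:48) is a plain poll-until-success loop: as part of elevateKubeSystemPrivileges, the tooling keeps checking whether the `default` service account exists yet. A minimal Go sketch of that pattern, assuming a kubectl binary and a kubeconfig path are available; illustrative only, not minikube's code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
    // deadline passes, mirroring the ~500ms cadence visible in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil // service account exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not found within %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
        }
    }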
	I1004 00:57:48.510440  143960 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:48.510536  143960 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:57:48.511531  143960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 00:57:48.511802  143960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 00:57:48.511952  143960 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 00:57:48.512033  143960 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-533597"
	I1004 00:57:48.512049  143960 config.go:182] Loaded profile config "ingress-addon-legacy-533597": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1004 00:57:48.512051  143960 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-533597"
	I1004 00:57:48.512089  143960 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-533597"
	I1004 00:57:48.512057  143960 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-533597"
	I1004 00:57:48.512267  143960 host.go:66] Checking if "ingress-addon-legacy-533597" exists ...
	I1004 00:57:48.512617  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:57:48.512649  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:57:48.512695  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:57:48.512725  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:57:48.512590  143960 kapi.go:59] client config for ingress-addon-legacy-533597: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 00:57:48.513503  143960 cert_rotation.go:137] Starting client certificate rotation controller
	I1004 00:57:48.528736  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40805
	I1004 00:57:48.529244  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:57:48.529678  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41809
	I1004 00:57:48.529851  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:57:48.529880  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:57:48.530146  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:57:48.530261  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:57:48.530514  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetState
	I1004 00:57:48.530682  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:57:48.530701  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:57:48.531098  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:57:48.531758  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:57:48.531818  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:57:48.533168  143960 kapi.go:59] client config for ingress-addon-legacy-533597: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 00:57:48.533499  143960 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-533597"
	I1004 00:57:48.533543  143960 host.go:66] Checking if "ingress-addon-legacy-533597" exists ...
	I1004 00:57:48.533988  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:57:48.534037  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:57:48.547881  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I1004 00:57:48.548409  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:57:48.548740  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35755
	I1004 00:57:48.548957  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:57:48.548981  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:57:48.549111  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:57:48.549401  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:57:48.549577  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:57:48.549599  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:57:48.549609  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetState
	I1004 00:57:48.549988  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:57:48.550635  143960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:57:48.550671  143960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:57:48.551405  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:48.553579  143960 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 00:57:48.555394  143960 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 00:57:48.554911  143960 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-533597" context rescaled to 1 replicas
	I1004 00:57:48.555411  143960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 00:57:48.555430  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:48.555438  143960 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 00:57:48.557152  143960 out.go:177] * Verifying Kubernetes components...
	I1004 00:57:48.558751  143960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 00:57:48.559591  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:48.560028  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:48.560063  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:48.560302  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:48.560507  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:48.560687  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:48.560876  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
	I1004 00:57:48.567044  143960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42225
	I1004 00:57:48.567548  143960 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:57:48.568162  143960 main.go:141] libmachine: Using API Version  1
	I1004 00:57:48.568187  143960 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:57:48.568549  143960 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:57:48.568759  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetState
	I1004 00:57:48.570689  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .DriverName
	I1004 00:57:48.571009  143960 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 00:57:48.571030  143960 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 00:57:48.571053  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHHostname
	I1004 00:57:48.574615  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:48.575205  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:92:33", ip: ""} in network mk-ingress-addon-legacy-533597: {Iface:virbr1 ExpiryTime:2023-10-04 01:57:00 +0000 UTC Type:0 Mac:52:54:00:d6:92:33 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ingress-addon-legacy-533597 Clientid:01:52:54:00:d6:92:33}
	I1004 00:57:48.575257  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | domain ingress-addon-legacy-533597 has defined IP address 192.168.39.57 and MAC address 52:54:00:d6:92:33 in network mk-ingress-addon-legacy-533597
	I1004 00:57:48.575429  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHPort
	I1004 00:57:48.575632  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHKeyPath
	I1004 00:57:48.575828  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .GetSSHUsername
	I1004 00:57:48.576006  143960 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa Username:docker}
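The sshutil.go lines above show new SSH clients being opened to the node at 192.168.39.57:22 as user docker with key-based auth, which is how the addon manifests are copied onto the VM. A minimal sketch of an equivalent connection using golang.org/x/crypto/ssh; this is not minikube's sshutil, just an illustration with the address, user, and key path taken from the log.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path and address as reported by sshutil.go above.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17348-128338/.minikube/machines/ingress-addon-legacy-533597/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.57:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Printf("%s err=%v\n", out, err)
    }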
	I1004 00:57:48.762435  143960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 00:57:48.774576  143960 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 00:57:48.827419  143960 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 00:57:48.828180  143960 kapi.go:59] client config for ingress-addon-legacy-533597: &rest.Config{Host:"https://192.168.39.57:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 00:57:48.828530  143960 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-533597" to be "Ready" ...
	I1004 00:57:48.834807  143960 node_ready.go:49] node "ingress-addon-legacy-533597" has status "Ready":"True"
	I1004 00:57:48.834833  143960 node_ready.go:38] duration metric: took 6.283048ms waiting for node "ingress-addon-legacy-533597" to be "Ready" ...
	I1004 00:57:48.834847  143960 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 00:57:48.845418  143960 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-2drc5" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:49.904099  143960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.14162656s)
	I1004 00:57:49.904156  143960 main.go:141] libmachine: Making call to close driver server
	I1004 00:57:49.904176  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Close
	I1004 00:57:49.904470  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Closing plugin on server side
	I1004 00:57:49.904539  143960 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:57:49.904564  143960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:57:49.904619  143960 main.go:141] libmachine: Making call to close driver server
	I1004 00:57:49.904632  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Close
	I1004 00:57:49.904883  143960 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:57:49.904925  143960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:57:49.904955  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Closing plugin on server side
	I1004 00:57:49.919570  143960 main.go:141] libmachine: Making call to close driver server
	I1004 00:57:49.919595  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Close
	I1004 00:57:49.919893  143960 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:57:49.919952  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Closing plugin on server side
	I1004 00:57:49.919954  143960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:57:49.973056  143960 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.198432997s)
	I1004 00:57:49.973104  143960 main.go:141] libmachine: Making call to close driver server
	I1004 00:57:49.973119  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Close
	I1004 00:57:49.973135  143960 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.145676478s)
	I1004 00:57:49.973159  143960 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1004 00:57:49.973428  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Closing plugin on server side
	I1004 00:57:49.973507  143960 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:57:49.973524  143960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:57:49.973552  143960 main.go:141] libmachine: Making call to close driver server
	I1004 00:57:49.973569  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) Calling .Close
	I1004 00:57:49.973928  143960 main.go:141] libmachine: (ingress-addon-legacy-533597) DBG | Closing plugin on server side
	I1004 00:57:49.973943  143960 main.go:141] libmachine: Successfully made call to close driver server
	I1004 00:57:49.973957  143960 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 00:57:49.975964  143960 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 00:57:49.977583  143960 addons.go:502] enable addons completed in 1.465631379s: enabled=[default-storageclass storage-provisioner]
	I1004 00:57:50.880741  143960 pod_ready.go:102] pod "coredns-66bff467f8-2drc5" in "kube-system" namespace has status "Ready":"False"
	I1004 00:57:53.377307  143960 pod_ready.go:102] pod "coredns-66bff467f8-2drc5" in "kube-system" namespace has status "Ready":"False"
	I1004 00:57:55.377405  143960 pod_ready.go:102] pod "coredns-66bff467f8-2drc5" in "kube-system" namespace has status "Ready":"False"
	I1004 00:57:55.877094  143960 pod_ready.go:92] pod "coredns-66bff467f8-2drc5" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:55.877126  143960 pod_ready.go:81] duration metric: took 7.03167318s waiting for pod "coredns-66bff467f8-2drc5" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.877139  143960 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.882179  143960 pod_ready.go:92] pod "etcd-ingress-addon-legacy-533597" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:55.882207  143960 pod_ready.go:81] duration metric: took 5.058564ms waiting for pod "etcd-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.882225  143960 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.886921  143960 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-533597" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:55.886947  143960 pod_ready.go:81] duration metric: took 4.712922ms waiting for pod "kube-apiserver-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.886960  143960 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.891385  143960 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-533597" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:55.891404  143960 pod_ready.go:81] duration metric: took 4.437182ms waiting for pod "kube-controller-manager-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.891412  143960 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mtmkq" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.895492  143960 pod_ready.go:92] pod "kube-proxy-mtmkq" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:55.895510  143960 pod_ready.go:81] duration metric: took 4.092345ms waiting for pod "kube-proxy-mtmkq" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:55.895518  143960 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:56.069850  143960 request.go:629] Waited for 174.268673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-533597
	I1004 00:57:56.270542  143960 request.go:629] Waited for 197.347875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ingress-addon-legacy-533597
	I1004 00:57:56.273809  143960 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-533597" in "kube-system" namespace has status "Ready":"True"
	I1004 00:57:56.273829  143960 pod_ready.go:81] duration metric: took 378.305007ms waiting for pod "kube-scheduler-ingress-addon-legacy-533597" in "kube-system" namespace to be "Ready" ...
	I1004 00:57:56.273854  143960 pod_ready.go:38] duration metric: took 7.438980838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
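The pod_ready waits above amount to repeatedly reading each pod's Ready condition until it reports True (the log shows minikube doing this through a client-go rest.Config rather than shelling out). A minimal sketch of the same check via kubectl's jsonpath output, for anyone reproducing it by hand; illustrative only.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady returns true once the pod's Ready condition is "True".
    func podReady(namespace, name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // same 6m budget as in the log
        for time.Now().Before(deadline) {
            if ok, _ := podReady("kube-system", "coredns-66bff467f8-2drc5"); ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }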
	I1004 00:57:56.273871  143960 api_server.go:52] waiting for apiserver process to appear ...
	I1004 00:57:56.273924  143960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 00:57:56.287862  143960 api_server.go:72] duration metric: took 7.73237879s to wait for apiserver process to appear ...
	I1004 00:57:56.287890  143960 api_server.go:88] waiting for apiserver healthz status ...
	I1004 00:57:56.287907  143960 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I1004 00:57:56.293893  143960 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I1004 00:57:56.294938  143960 api_server.go:141] control plane version: v1.18.20
	I1004 00:57:56.294970  143960 api_server.go:131] duration metric: took 7.0724ms to wait for apiserver health ...
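The healthz probe above is a plain HTTPS GET against the apiserver that succeeds once it returns 200 with body "ok". A minimal sketch, assuming you either load the cluster CA or, as here for brevity, skip certificate verification; not minikube's implementation.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // For a stricter check, load the CA from /var/lib/minikube/certs instead of skipping verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.57:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
    }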
	I1004 00:57:56.294980  143960 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 00:57:56.470554  143960 request.go:629] Waited for 175.482254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I1004 00:57:56.477758  143960 system_pods.go:59] 7 kube-system pods found
	I1004 00:57:56.477792  143960 system_pods.go:61] "coredns-66bff467f8-2drc5" [1ce0631f-a6aa-4b31-8a19-35aacfb539bc] Running
	I1004 00:57:56.477800  143960 system_pods.go:61] "etcd-ingress-addon-legacy-533597" [2fb7b67b-2bfd-4682-b1ef-35c64e81260d] Running
	I1004 00:57:56.477806  143960 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-533597" [e304b34e-2230-45f5-a579-1431f7ce3c70] Running
	I1004 00:57:56.477811  143960 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-533597" [574a24a0-5047-415a-b331-b716706d6f77] Running
	I1004 00:57:56.477817  143960 system_pods.go:61] "kube-proxy-mtmkq" [6b0cc892-5df9-4cd4-93fd-89eac73552e9] Running
	I1004 00:57:56.477829  143960 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-533597" [34393f73-75a9-4db1-8ac2-7cab9274a1d7] Running
	I1004 00:57:56.477835  143960 system_pods.go:61] "storage-provisioner" [31aa27e2-76e0-4f02-89f7-e408016e86c3] Running
	I1004 00:57:56.477858  143960 system_pods.go:74] duration metric: took 182.868669ms to wait for pod list to return data ...
	I1004 00:57:56.477872  143960 default_sa.go:34] waiting for default service account to be created ...
	I1004 00:57:56.670380  143960 request.go:629] Waited for 192.409847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I1004 00:57:56.673289  143960 default_sa.go:45] found service account: "default"
	I1004 00:57:56.673321  143960 default_sa.go:55] duration metric: took 195.440548ms for default service account to be created ...
	I1004 00:57:56.673332  143960 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 00:57:56.869766  143960 request.go:629] Waited for 196.315824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I1004 00:57:56.876541  143960 system_pods.go:86] 7 kube-system pods found
	I1004 00:57:56.876572  143960 system_pods.go:89] "coredns-66bff467f8-2drc5" [1ce0631f-a6aa-4b31-8a19-35aacfb539bc] Running
	I1004 00:57:56.876577  143960 system_pods.go:89] "etcd-ingress-addon-legacy-533597" [2fb7b67b-2bfd-4682-b1ef-35c64e81260d] Running
	I1004 00:57:56.876581  143960 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-533597" [e304b34e-2230-45f5-a579-1431f7ce3c70] Running
	I1004 00:57:56.876588  143960 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-533597" [574a24a0-5047-415a-b331-b716706d6f77] Running
	I1004 00:57:56.876591  143960 system_pods.go:89] "kube-proxy-mtmkq" [6b0cc892-5df9-4cd4-93fd-89eac73552e9] Running
	I1004 00:57:56.876595  143960 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-533597" [34393f73-75a9-4db1-8ac2-7cab9274a1d7] Running
	I1004 00:57:56.876599  143960 system_pods.go:89] "storage-provisioner" [31aa27e2-76e0-4f02-89f7-e408016e86c3] Running
	I1004 00:57:56.876605  143960 system_pods.go:126] duration metric: took 203.267433ms to wait for k8s-apps to be running ...
	I1004 00:57:56.876614  143960 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 00:57:56.876669  143960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 00:57:56.889733  143960 system_svc.go:56] duration metric: took 13.104389ms WaitForService to wait for kubelet.
	I1004 00:57:56.889763  143960 kubeadm.go:581] duration metric: took 8.334286998s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 00:57:56.889787  143960 node_conditions.go:102] verifying NodePressure condition ...
	I1004 00:57:57.070189  143960 request.go:629] Waited for 180.327985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I1004 00:57:57.074072  143960 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 00:57:57.074106  143960 node_conditions.go:123] node cpu capacity is 2
	I1004 00:57:57.074117  143960 node_conditions.go:105] duration metric: took 184.32539ms to run NodePressure ...
	I1004 00:57:57.074129  143960 start.go:228] waiting for startup goroutines ...
	I1004 00:57:57.074137  143960 start.go:233] waiting for cluster config update ...
	I1004 00:57:57.074145  143960 start.go:242] writing updated cluster config ...
	I1004 00:57:57.074364  143960 ssh_runner.go:195] Run: rm -f paused
	I1004 00:57:57.123666  143960 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1004 00:57:57.125726  143960 out.go:177] 
	W1004 00:57:57.127262  143960 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1004 00:57:57.128749  143960 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1004 00:57:57.130042  143960 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-533597" cluster and "default" namespace by default
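The closing warning compares the host kubectl (1.28.2) against the cluster version (1.18.20) and reports a minor-version skew of 10, which is what triggers the incompatibility notice. The check is just the difference between the minor components; a small illustrative sketch, not minikube's code.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component of a "major.minor.patch" version string.
    func minor(v string) (int, error) {
        parts := strings.Split(v, ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        clientMinor, _ := minor("1.28.2")  // kubectl on the host
        serverMinor, _ := minor("1.18.20") // cluster version
        skew := clientMinor - serverMinor
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints: minor skew: 10
        if skew > 1 {
            fmt.Println("kubectl may have incompatibilities with this cluster")
        }
    }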
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 00:56:56 UTC, ends at Wed 2023-10-04 01:01:12 UTC. --
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.481036654Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381272481024766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=ca1e9bad-e31b-46a5-aa38-f54ec095209d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.481691160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f5d1ffa5-7919-4819-94b5-ed679b6a9297 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.481775363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f5d1ffa5-7919-4819-94b5-ed679b6a9297 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.482438217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:928b0cd47c8a68c76cd192bf977bac4e8bcb579e47a41ee02f72056c40f579bd,PodSandboxId:031617a0ba7894ce8ab57c9c6dd13a8ab09f41544c1929b7a4c6151c8298b83e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696381258415966534,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-82fh5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91b3b40d-a03d-4246-a01c-448ec71deb68,},Annotations:map[string]string{io.kubernetes.container.hash: 71d31e78,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd65c068fd6952c14de3df020a6f32d9b6872fb2654b0fde14133a98ee6ccc8d,PodSandboxId:1f1679c24e28e7b72055519254b7530c78de97f019a862571ce0a5e932e96d8b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696381117958832423,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2db4a68-cb2d-46e7-a035-b1952849bb0a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: dd2c74b4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072b178e10d41cf37e6d544e9b6957ca8458a147079552a8d20cd2b2b16b07e1,PodSandboxId:ff496f2c8f9584bb4f5ed099f7d7fac384909ede2f7d86d1004621d0771fe446,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696381093420715952,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-shzbh,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d,},Annotations:map[string]string{io.kubernetes.container.hash: c5a76b80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:39f819660608c05346d450fe44c8f9946afa35b437ab147e52b871bd98174f77,PodSandboxId:e586ea44e9e46c4d60e94d753724fb315965a928d6beeed1c039ca74477e63f3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082999077905,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ttlpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17a4b23c-278e-4ce3-a25c-af727f10bce5,},Annotations:map[string]string{io.kubernetes.container.hash: df784c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c2fb8be53360de8f9c406ac86003f195540f400b4b36f34441c1a6dc1f2360,PodSandboxId:db19f4205260bd477176e0c173fec6760c24cd3ce271e6bf73b30da9bede3fd9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082077499114,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-brcpb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3fb2cee-af20-4a02-990d-f4737a0006ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfaf239,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813f5aedb5cfdf95402df3244e25a7d82e2ec2129b5e18204891b2b094c75d5,PodSandboxId:124298a880e00ed73148295f18bcaa1aeec657b21b6ca0b21c29bf329574bde0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381070798060392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31aa27e2-76e0-4f02-89f7-e408016e86c3,},Annotations:map[string]string{io.kubernetes.container.hash: e15b3511,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b36b3f16a8036ba3645a1e6d1aa3ce160e840d8a6be6128f150a56bcfcdb34,PodSandboxId:20443346bd1550b8195ea4a436a4226904f4bc4de181f59d56684cbff8aed2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696381070438109465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2drc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce0631f-a6aa-4b31-8a19-35aacfb539bc,},Annotations:map[string]string{io.kubernetes.container.hash: 87deb8da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d9f7c97f11bb057802af03237
455322358f351c64f12a03ab4149712eeeeb,PodSandboxId:e817a0243d51b3235d5a45d5295e1672b91ddfa86b54d3ea50660a42cca3943a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696381069866472171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mtmkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0cc892-5df9-4cd4-93fd-89eac73552e9,},Annotations:map[string]string{io.kubernetes.container.hash: 4cdd997a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0eaa4e1c4458946960fcc7e86c61dbeac026c81e9bcbdf89aa7d1b218cb3f1,Pod
SandboxId:69d73bb3d77ad5717c39d3e2375ff57619fc534eac44e6c8ca3ef9f4921f85c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696381046263209679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b59e32de7cff509c2223013237133fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc00cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bdcf7356c9dc2fe2766d9694a4a45866e0e69cb428ad530f4dc68d0e7e7d2a,PodSandboxId:17475f4e3ec32d9d275d5630ca1769b305dc
dd76444c43d568f4aed3faf9da7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696381044879919527,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa29fb7a9d08a621525466d6184002e89d69f5ad3a1211e7f338ab2aff3e2d70,PodSandboxId:1181f9a9610a9841e6f86f1132f4db2e79b76febdd
162f1d735b05d605521d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696381044846927407,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d8ff35c8d7ac24af565779fb5489f3003397edc2edec57e362c42143bb205d,PodSandboxId:5005fa390485266664f02cd797c3c2637067698a737dc03c
0a4227db04ab14eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696381044683468895,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f5d1ffa5-7919-4819-94b5-ed679b6a9297 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.522612013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=84d3e920-73b5-4a60-b0fc-b4b2e9d1c201 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.522667392Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=84d3e920-73b5-4a60-b0fc-b4b2e9d1c201 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.524466557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=83dee677-4f84-48cf-ac0c-7e915b84d48b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.525024421Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381272525009230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=83dee677-4f84-48cf-ac0c-7e915b84d48b name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.525624188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bde322b-6a58-4c7c-a310-250a6c9805bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.525707438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bde322b-6a58-4c7c-a310-250a6c9805bf name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.526023105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:928b0cd47c8a68c76cd192bf977bac4e8bcb579e47a41ee02f72056c40f579bd,PodSandboxId:031617a0ba7894ce8ab57c9c6dd13a8ab09f41544c1929b7a4c6151c8298b83e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696381258415966534,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-82fh5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91b3b40d-a03d-4246-a01c-448ec71deb68,},Annotations:map[string]string{io.kubernetes.container.hash: 71d31e78,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd65c068fd6952c14de3df020a6f32d9b6872fb2654b0fde14133a98ee6ccc8d,PodSandboxId:1f1679c24e28e7b72055519254b7530c78de97f019a862571ce0a5e932e96d8b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696381117958832423,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2db4a68-cb2d-46e7-a035-b1952849bb0a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: dd2c74b4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072b178e10d41cf37e6d544e9b6957ca8458a147079552a8d20cd2b2b16b07e1,PodSandboxId:ff496f2c8f9584bb4f5ed099f7d7fac384909ede2f7d86d1004621d0771fe446,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696381093420715952,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-shzbh,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d,},Annotations:map[string]string{io.kubernetes.container.hash: c5a76b80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:39f819660608c05346d450fe44c8f9946afa35b437ab147e52b871bd98174f77,PodSandboxId:e586ea44e9e46c4d60e94d753724fb315965a928d6beeed1c039ca74477e63f3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082999077905,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ttlpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17a4b23c-278e-4ce3-a25c-af727f10bce5,},Annotations:map[string]string{io.kubernetes.container.hash: df784c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c2fb8be53360de8f9c406ac86003f195540f400b4b36f34441c1a6dc1f2360,PodSandboxId:db19f4205260bd477176e0c173fec6760c24cd3ce271e6bf73b30da9bede3fd9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082077499114,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-brcpb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3fb2cee-af20-4a02-990d-f4737a0006ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfaf239,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813f5aedb5cfdf95402df3244e25a7d82e2ec2129b5e18204891b2b094c75d5,PodSandboxId:124298a880e00ed73148295f18bcaa1aeec657b21b6ca0b21c29bf329574bde0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381070798060392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31aa27e2-76e0-4f02-89f7-e408016e86c3,},Annotations:map[string]string{io.kubernetes.container.hash: e15b3511,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b36b3f16a8036ba3645a1e6d1aa3ce160e840d8a6be6128f150a56bcfcdb34,PodSandboxId:20443346bd1550b8195ea4a436a4226904f4bc4de181f59d56684cbff8aed2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696381070438109465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2drc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce0631f-a6aa-4b31-8a19-35aacfb539bc,},Annotations:map[string]string{io.kubernetes.container.hash: 87deb8da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d9f7c97f11bb057802af03237
455322358f351c64f12a03ab4149712eeeeb,PodSandboxId:e817a0243d51b3235d5a45d5295e1672b91ddfa86b54d3ea50660a42cca3943a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696381069866472171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mtmkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0cc892-5df9-4cd4-93fd-89eac73552e9,},Annotations:map[string]string{io.kubernetes.container.hash: 4cdd997a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0eaa4e1c4458946960fcc7e86c61dbeac026c81e9bcbdf89aa7d1b218cb3f1,Pod
SandboxId:69d73bb3d77ad5717c39d3e2375ff57619fc534eac44e6c8ca3ef9f4921f85c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696381046263209679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b59e32de7cff509c2223013237133fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc00cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bdcf7356c9dc2fe2766d9694a4a45866e0e69cb428ad530f4dc68d0e7e7d2a,PodSandboxId:17475f4e3ec32d9d275d5630ca1769b305dc
dd76444c43d568f4aed3faf9da7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696381044879919527,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa29fb7a9d08a621525466d6184002e89d69f5ad3a1211e7f338ab2aff3e2d70,PodSandboxId:1181f9a9610a9841e6f86f1132f4db2e79b76febdd
162f1d735b05d605521d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696381044846927407,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d8ff35c8d7ac24af565779fb5489f3003397edc2edec57e362c42143bb205d,PodSandboxId:5005fa390485266664f02cd797c3c2637067698a737dc03c
0a4227db04ab14eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696381044683468895,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bde322b-6a58-4c7c-a310-250a6c9805bf name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.566483012Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d58cf441-30d5-4b77-bae0-638cf8cd81bd name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.566564469Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d58cf441-30d5-4b77-bae0-638cf8cd81bd name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.567988717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4f24d56e-320c-4b8c-8291-22926ca8fbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.568570398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381272568553845,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=4f24d56e-320c-4b8c-8291-22926ca8fbb8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.569147428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=584fdc3c-6d0f-443e-8f5e-e437e03899a6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.569192436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=584fdc3c-6d0f-443e-8f5e-e437e03899a6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.569543510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:928b0cd47c8a68c76cd192bf977bac4e8bcb579e47a41ee02f72056c40f579bd,PodSandboxId:031617a0ba7894ce8ab57c9c6dd13a8ab09f41544c1929b7a4c6151c8298b83e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696381258415966534,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-82fh5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91b3b40d-a03d-4246-a01c-448ec71deb68,},Annotations:map[string]string{io.kubernetes.container.hash: 71d31e78,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd65c068fd6952c14de3df020a6f32d9b6872fb2654b0fde14133a98ee6ccc8d,PodSandboxId:1f1679c24e28e7b72055519254b7530c78de97f019a862571ce0a5e932e96d8b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696381117958832423,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2db4a68-cb2d-46e7-a035-b1952849bb0a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: dd2c74b4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072b178e10d41cf37e6d544e9b6957ca8458a147079552a8d20cd2b2b16b07e1,PodSandboxId:ff496f2c8f9584bb4f5ed099f7d7fac384909ede2f7d86d1004621d0771fe446,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696381093420715952,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-shzbh,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d,},Annotations:map[string]string{io.kubernetes.container.hash: c5a76b80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:39f819660608c05346d450fe44c8f9946afa35b437ab147e52b871bd98174f77,PodSandboxId:e586ea44e9e46c4d60e94d753724fb315965a928d6beeed1c039ca74477e63f3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082999077905,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ttlpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17a4b23c-278e-4ce3-a25c-af727f10bce5,},Annotations:map[string]string{io.kubernetes.container.hash: df784c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c2fb8be53360de8f9c406ac86003f195540f400b4b36f34441c1a6dc1f2360,PodSandboxId:db19f4205260bd477176e0c173fec6760c24cd3ce271e6bf73b30da9bede3fd9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082077499114,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-brcpb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3fb2cee-af20-4a02-990d-f4737a0006ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfaf239,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813f5aedb5cfdf95402df3244e25a7d82e2ec2129b5e18204891b2b094c75d5,PodSandboxId:124298a880e00ed73148295f18bcaa1aeec657b21b6ca0b21c29bf329574bde0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381070798060392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31aa27e2-76e0-4f02-89f7-e408016e86c3,},Annotations:map[string]string{io.kubernetes.container.hash: e15b3511,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b36b3f16a8036ba3645a1e6d1aa3ce160e840d8a6be6128f150a56bcfcdb34,PodSandboxId:20443346bd1550b8195ea4a436a4226904f4bc4de181f59d56684cbff8aed2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696381070438109465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2drc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce0631f-a6aa-4b31-8a19-35aacfb539bc,},Annotations:map[string]string{io.kubernetes.container.hash: 87deb8da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d9f7c97f11bb057802af03237
455322358f351c64f12a03ab4149712eeeeb,PodSandboxId:e817a0243d51b3235d5a45d5295e1672b91ddfa86b54d3ea50660a42cca3943a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696381069866472171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mtmkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0cc892-5df9-4cd4-93fd-89eac73552e9,},Annotations:map[string]string{io.kubernetes.container.hash: 4cdd997a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0eaa4e1c4458946960fcc7e86c61dbeac026c81e9bcbdf89aa7d1b218cb3f1,Pod
SandboxId:69d73bb3d77ad5717c39d3e2375ff57619fc534eac44e6c8ca3ef9f4921f85c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696381046263209679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b59e32de7cff509c2223013237133fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc00cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bdcf7356c9dc2fe2766d9694a4a45866e0e69cb428ad530f4dc68d0e7e7d2a,PodSandboxId:17475f4e3ec32d9d275d5630ca1769b305dc
dd76444c43d568f4aed3faf9da7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696381044879919527,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa29fb7a9d08a621525466d6184002e89d69f5ad3a1211e7f338ab2aff3e2d70,PodSandboxId:1181f9a9610a9841e6f86f1132f4db2e79b76febdd
162f1d735b05d605521d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696381044846927407,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d8ff35c8d7ac24af565779fb5489f3003397edc2edec57e362c42143bb205d,PodSandboxId:5005fa390485266664f02cd797c3c2637067698a737dc03c
0a4227db04ab14eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696381044683468895,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=584fdc3c-6d0f-443e-8f5e-e437e03899a6 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.603927507Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c0f2f9b6-69c8-4ead-af1d-b2fc4f64cf7c name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.603983305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c0f2f9b6-69c8-4ead-af1d-b2fc4f64cf7c name=/runtime.v1.RuntimeService/Version
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.605051220Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dbbc5b64-90d6-4a3e-ac32-f60cdfb0d656 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.605570095Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381272605555737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202351,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=dbbc5b64-90d6-4a3e-ac32-f60cdfb0d656 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.606216806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d687d744-87c2-421a-b317-a8926eec28fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.606263345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d687d744-87c2-421a-b317-a8926eec28fe name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:01:12 ingress-addon-legacy-533597 crio[719]: time="2023-10-04 01:01:12.606582097Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:928b0cd47c8a68c76cd192bf977bac4e8bcb579e47a41ee02f72056c40f579bd,PodSandboxId:031617a0ba7894ce8ab57c9c6dd13a8ab09f41544c1929b7a4c6151c8298b83e,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6,State:CONTAINER_RUNNING,CreatedAt:1696381258415966534,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-82fh5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 91b3b40d-a03d-4246-a01c-448ec71deb68,},Annotations:map[string]string{io.kubernetes.container.hash: 71d31e78,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd65c068fd6952c14de3df020a6f32d9b6872fb2654b0fde14133a98ee6ccc8d,PodSandboxId:1f1679c24e28e7b72055519254b7530c78de97f019a862571ce0a5e932e96d8b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14,State:CONTAINER_RUNNING,CreatedAt:1696381117958832423,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e2db4a68-cb2d-46e7-a035-b1952849bb0a,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: dd2c74b4,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:072b178e10d41cf37e6d544e9b6957ca8458a147079552a8d20cd2b2b16b07e1,PodSandboxId:ff496f2c8f9584bb4f5ed099f7d7fac384909ede2f7d86d1004621d0771fe446,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1696381093420715952,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-shzbh,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d,},Annotations:map[string]string{io.kubernetes.container.hash: c5a76b80,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:39f819660608c05346d450fe44c8f9946afa35b437ab147e52b871bd98174f77,PodSandboxId:e586ea44e9e46c4d60e94d753724fb315965a928d6beeed1c039ca74477e63f3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082999077905,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-ttlpn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 17a4b23c-278e-4ce3-a25c-af727f10bce5,},Annotations:map[string]string{io.kubernetes.container.hash: df784c59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0c2fb8be53360de8f9c406ac86003f195540f400b4b36f34441c1a6dc1f2360,PodSandboxId:db19f4205260bd477176e0c173fec6760c24cd3ce271e6bf73b30da9bede3fd9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1696381082077499114,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-brcpb,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e3fb2cee-af20-4a02-990d-f4737a0006ca,},Annotations:map[string]string{io.kubernetes.container.hash: 6bfaf239,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7813f5aedb5cfdf95402df3244e25a7d82e2ec2129b5e18204891b2b094c75d5,PodSandboxId:124298a880e00ed73148295f18bcaa1aeec657b21b6ca0b21c29bf329574bde0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Imag
e:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381070798060392,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 31aa27e2-76e0-4f02-89f7-e408016e86c3,},Annotations:map[string]string{io.kubernetes.container.hash: e15b3511,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1b36b3f16a8036ba3645a1e6d1aa3ce160e840d8a6be6128f150a56bcfcdb34,PodSandboxId:20443346bd1550b8195ea4a436a4226904f4bc4de181f59d56684cbff8aed2f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1696381070438109465,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-2drc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ce0631f-a6aa-4b31-8a19-35aacfb539bc,},Annotations:map[string]string{io.kubernetes.container.hash: 87deb8da,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d19d9f7c97f11bb057802af03237
455322358f351c64f12a03ab4149712eeeeb,PodSandboxId:e817a0243d51b3235d5a45d5295e1672b91ddfa86b54d3ea50660a42cca3943a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1696381069866472171,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mtmkq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b0cc892-5df9-4cd4-93fd-89eac73552e9,},Annotations:map[string]string{io.kubernetes.container.hash: 4cdd997a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b0eaa4e1c4458946960fcc7e86c61dbeac026c81e9bcbdf89aa7d1b218cb3f1,Pod
SandboxId:69d73bb3d77ad5717c39d3e2375ff57619fc534eac44e6c8ca3ef9f4921f85c2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1696381046263209679,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b59e32de7cff509c2223013237133fd5,},Annotations:map[string]string{io.kubernetes.container.hash: 1dc00cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8bdcf7356c9dc2fe2766d9694a4a45866e0e69cb428ad530f4dc68d0e7e7d2a,PodSandboxId:17475f4e3ec32d9d275d5630ca1769b305dc
dd76444c43d568f4aed3faf9da7c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1696381044879919527,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa29fb7a9d08a621525466d6184002e89d69f5ad3a1211e7f338ab2aff3e2d70,PodSandboxId:1181f9a9610a9841e6f86f1132f4db2e79b76febdd
162f1d735b05d605521d9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1696381044846927407,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9f922c35540c0fadcadcb2a82cc505c,},Annotations:map[string]string{io.kubernetes.container.hash: 730b4701,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42d8ff35c8d7ac24af565779fb5489f3003397edc2edec57e362c42143bb205d,PodSandboxId:5005fa390485266664f02cd797c3c2637067698a737dc03c
0a4227db04ab14eb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1696381044683468895,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-533597,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d687d744-87c2-421a-b317-a8926eec28fe name=/runtime.v1.RuntimeSer
vice/ListContainers
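	
	Editor's note: the RuntimeService/ListContainers request/response pairs in the crio journal above are the CRI calls that back the container status table below. Purely as a hedged illustration (not part of the captured test run), a minimal Go sketch of issuing the same call against the CRI-O socket named in the node's cri-socket annotation might look like the following; it assumes the k8s.io/cri-api and google.golang.org/grpc modules are available on the client side.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Assumption: CRI-O is listening on the socket path shown in the
		// kubeadm.alpha.kubernetes.io/cri-socket annotation (/var/run/crio/crio.sock).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
	
		// An empty filter returns the full container list, which is what the
		// "No filters were applied, returning full container list" debug lines report.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\t%v\n", c.Id, c.Metadata.Name, c.State)
		}
	}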
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	928b0cd47c8a6       gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6            14 seconds ago      Running             hello-world-app           0                   031617a0ba789       hello-world-app-5f5d8b66bb-82fh5
	dd65c068fd695       docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14                    2 minutes ago       Running             nginx                     0                   1f1679c24e28e       nginx
	072b178e10d41       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   ff496f2c8f958       ingress-nginx-controller-7fcf777cb7-shzbh
	39f819660608c       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   e586ea44e9e46       ingress-nginx-admission-patch-ttlpn
	f0c2fb8be5336       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   db19f4205260b       ingress-nginx-admission-create-brcpb
	7813f5aedb5cf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   124298a880e00       storage-provisioner
	e1b36b3f16a80       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   20443346bd155       coredns-66bff467f8-2drc5
	d19d9f7c97f11       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   e817a0243d51b       kube-proxy-mtmkq
	3b0eaa4e1c445       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   69d73bb3d77ad       etcd-ingress-addon-legacy-533597
	b8bdcf7356c9d       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   17475f4e3ec32       kube-scheduler-ingress-addon-legacy-533597
	fa29fb7a9d08a       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   1181f9a9610a9       kube-apiserver-ingress-addon-legacy-533597
	42d8ff35c8d7a       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   5005fa3904852       kube-controller-manager-ingress-addon-legacy-533597
	
	* 
	* ==> coredns [e1b36b3f16a8036ba3645a1e6d1aa3ce160e840d8a6be6128f150a56bcfcdb34] <==
	* [INFO] 10.244.0.5:48835 - 13599 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.0000637s
	[INFO] 10.244.0.5:48835 - 13523 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000326454s
	[INFO] 10.244.0.5:48835 - 41209 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000078343s
	[INFO] 10.244.0.5:44314 - 54025 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000091667s
	[INFO] 10.244.0.5:48835 - 53692 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000143752s
	[INFO] 10.244.0.5:44314 - 12193 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00003573s
	[INFO] 10.244.0.5:44314 - 175 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084185s
	[INFO] 10.244.0.5:44314 - 57870 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097427s
	[INFO] 10.244.0.5:44314 - 48507 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044499s
	[INFO] 10.244.0.5:44314 - 29585 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00002718s
	[INFO] 10.244.0.5:44314 - 18418 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000204814s
	[INFO] 10.244.0.5:42580 - 40124 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100437s
	[INFO] 10.244.0.5:60177 - 55107 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000034615s
	[INFO] 10.244.0.5:42580 - 6156 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00004873s
	[INFO] 10.244.0.5:60177 - 22544 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000024258s
	[INFO] 10.244.0.5:60177 - 6564 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000038013s
	[INFO] 10.244.0.5:42580 - 33133 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102992s
	[INFO] 10.244.0.5:60177 - 8825 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039898s
	[INFO] 10.244.0.5:42580 - 33924 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000112254s
	[INFO] 10.244.0.5:42580 - 36274 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070614s
	[INFO] 10.244.0.5:60177 - 61376 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115583s
	[INFO] 10.244.0.5:60177 - 36441 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006195s
	[INFO] 10.244.0.5:42580 - 21487 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000115798s
	[INFO] 10.244.0.5:60177 - 60502 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122795s
	[INFO] 10.244.0.5:42580 - 12363 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056286s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-533597
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-533597
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=ingress-addon-legacy-533597
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T00_57_33_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 00:57:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-533597
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:01:04 +0000   Wed, 04 Oct 2023 00:57:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:01:04 +0000   Wed, 04 Oct 2023 00:57:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:01:04 +0000   Wed, 04 Oct 2023 00:57:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:01:04 +0000   Wed, 04 Oct 2023 00:57:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ingress-addon-legacy-533597
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 034196d6dd16499490702b8faf67654a
	  System UUID:                034196d6-dd16-4994-9070-2b8faf67654a
	  Boot ID:                    e07d4ee7-c0ee-4776-8fbe-9bc8735a27d6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-82fh5                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 coredns-66bff467f8-2drc5                                100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m24s
	  kube-system                 etcd-ingress-addon-legacy-533597                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-533597              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-533597     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-mtmkq                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-533597              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m39s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s  kubelet     Node ingress-addon-legacy-533597 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s  kubelet     Node ingress-addon-legacy-533597 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s  kubelet     Node ingress-addon-legacy-533597 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m28s  kubelet     Node ingress-addon-legacy-533597 status is now: NodeReady
	  Normal  Starting                 3m22s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 4 00:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.099927] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.418909] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.609741] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142417] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Oct 4 00:57] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000043] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000004] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.598020] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.108782] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.151597] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.100796] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.231762] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[  +8.197429] systemd-fstab-generator[1029]: Ignoring "noauto" for root device
	[  +2.228770] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.161407] systemd-fstab-generator[1421]: Ignoring "noauto" for root device
	[ +16.731849] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.323713] kauditd_printk_skb: 11 callbacks suppressed
	[Oct 4 00:58] kauditd_printk_skb: 6 callbacks suppressed
	[ +30.390606] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.973019] kauditd_printk_skb: 3 callbacks suppressed
	[Oct 4 01:01] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [3b0eaa4e1c4458946960fcc7e86c61dbeac026c81e9bcbdf89aa7d1b218cb3f1] <==
	* 2023-10-04 00:57:26.431948 W | auth: simple token is not cryptographically signed
	2023-10-04 00:57:26.436373 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-10-04 00:57:26.439082 I | etcdserver: 79ee2fa200dbf73d as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/10/04 00:57:26 INFO: 79ee2fa200dbf73d switched to configuration voters=(8786012295892039485)
	2023-10-04 00:57:26.439706 I | etcdserver/membership: added member 79ee2fa200dbf73d [https://192.168.39.57:2380] to cluster cdb6bc6ece496785
	2023-10-04 00:57:26.440951 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-04 00:57:26.441138 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-04 00:57:26.441198 I | embed: listening for peers on 192.168.39.57:2380
	raft2023/10/04 00:57:27 INFO: 79ee2fa200dbf73d is starting a new election at term 1
	raft2023/10/04 00:57:27 INFO: 79ee2fa200dbf73d became candidate at term 2
	raft2023/10/04 00:57:27 INFO: 79ee2fa200dbf73d received MsgVoteResp from 79ee2fa200dbf73d at term 2
	raft2023/10/04 00:57:27 INFO: 79ee2fa200dbf73d became leader at term 2
	raft2023/10/04 00:57:27 INFO: raft.node: 79ee2fa200dbf73d elected leader 79ee2fa200dbf73d at term 2
	2023-10-04 00:57:27.424201 I | etcdserver: published {Name:ingress-addon-legacy-533597 ClientURLs:[https://192.168.39.57:2379]} to cluster cdb6bc6ece496785
	2023-10-04 00:57:27.424541 I | embed: ready to serve client requests
	2023-10-04 00:57:27.424856 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-04 00:57:27.425021 I | embed: ready to serve client requests
	2023-10-04 00:57:27.425910 I | embed: serving client requests on 192.168.39.57:2379
	2023-10-04 00:57:27.426396 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-04 00:57:27.426482 I | etcdserver/api: enabled capabilities for version 3.4
	2023-10-04 00:57:27.428909 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-04 00:57:49.836379 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/storage-provisioner\" " with result "range_response_count:0 size:5" took too long (483.888312ms) to execute
	2023-10-04 00:57:49.866699 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" " with result "range_response_count:1 size:703" took too long (512.394537ms) to execute
	2023-10-04 00:57:49.867750 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (493.580103ms) to execute
	2023-10-04 00:57:49.868563 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-66bff467f8-2drc5\" " with result "range_response_count:1 size:3656" took too long (494.88433ms) to execute
	
	* 
	* ==> kernel <==
	*  01:01:12 up 4 min,  0 users,  load average: 0.34, 0.52, 0.25
	Linux ingress-addon-legacy-533597 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [fa29fb7a9d08a621525466d6184002e89d69f5ad3a1211e7f338ab2aff3e2d70] <==
	* I1004 00:57:31.371409       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1004 00:57:31.371500       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1004 00:57:31.378878       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1004 00:57:31.387931       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1004 00:57:31.387989       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1004 00:57:31.866369       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 00:57:31.913219       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1004 00:57:32.019714       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.57]
	I1004 00:57:32.020546       1 controller.go:609] quota admission added evaluator for: endpoints
	I1004 00:57:32.028654       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 00:57:32.728466       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1004 00:57:33.599455       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1004 00:57:33.736142       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1004 00:57:34.050921       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 00:57:48.138181       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1004 00:57:48.209662       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1004 00:57:49.873741       1 trace.go:116] Trace[1389401620]: "Get" url:/apis/storage.k8s.io/v1/storageclasses/standard,user-agent:kubectl/v1.18.20 (linux/amd64) kubernetes/1f3e19b,client:127.0.0.1 (started: 2023-10-04 00:57:49.362866544 +0000 UTC m=+24.319392610) (total time: 510.839801ms):
	Trace[1389401620]: [510.839801ms] [510.830975ms] END
	I1004 00:57:49.874647       1 trace.go:116] Trace[1210886602]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-66bff467f8-2drc5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.39.1 (started: 2023-10-04 00:57:49.368765664 +0000 UTC m=+24.325291678) (total time: 505.859659ms):
	Trace[1210886602]: [504.64392ms] [504.637504ms] About to write a response
	I1004 00:57:49.875613       1 trace.go:116] Trace[1586210074]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.57 (started: 2023-10-04 00:57:49.35317837 +0000 UTC m=+24.309704399) (total time: 522.312604ms):
	Trace[1586210074]: [522.272367ms] [522.267813ms] About to write a response
	I1004 00:57:57.988661       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1004 00:58:32.941394       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1004 01:01:05.148131       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [42d8ff35c8d7ac24af565779fb5489f3003397edc2edec57e362c42143bb205d] <==
	* I1004 00:57:48.332358       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-533597", UID:"bf26b09e-9586-440b-8373-2802c66e7571", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-533597 event: Registered Node ingress-addon-legacy-533597 in Controller
	I1004 00:57:48.426215       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I1004 00:57:48.475466       1 shared_informer.go:230] Caches are synced for endpoint 
	I1004 00:57:48.475748       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1004 00:57:48.524363       1 shared_informer.go:230] Caches are synced for certificate-csrapproving 
	I1004 00:57:48.525934       1 shared_informer.go:230] Caches are synced for certificate-csrsigning 
	I1004 00:57:48.556907       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"b7192a5a-4c97-41cb-b6b6-1277ed054ce6", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1004 00:57:48.677893       1 shared_informer.go:230] Caches are synced for resource quota 
	I1004 00:57:48.678859       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0d53c73b-26a1-4347-96f6-56b267946f62", APIVersion:"apps/v1", ResourceVersion:"349", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-qpn7n
	I1004 00:57:48.684076       1 shared_informer.go:230] Caches are synced for resource quota 
	I1004 00:57:48.694884       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1004 00:57:48.720273       1 shared_informer.go:230] Caches are synced for persistent volume 
	I1004 00:57:48.735864       1 shared_informer.go:230] Caches are synced for attach detach 
	I1004 00:57:48.736637       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1004 00:57:48.736675       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1004 00:57:48.763889       1 shared_informer.go:230] Caches are synced for expand 
	I1004 00:57:48.785164       1 shared_informer.go:230] Caches are synced for PV protection 
	I1004 00:57:57.972393       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c3637854-af97-4a83-961e-63df2f41877c", APIVersion:"apps/v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1004 00:57:58.013161       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"55c4ddca-d5bb-414c-9efc-ed0b772274c9", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-shzbh
	I1004 00:57:58.021752       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f6aa27d9-a490-4c30-84d6-7aeaba02b6ea", APIVersion:"batch/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-brcpb
	I1004 00:57:58.136756       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"71340b40-8e35-460f-a4c5-b024aaade8f0", APIVersion:"batch/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-ttlpn
	I1004 00:58:02.311378       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"f6aa27d9-a490-4c30-84d6-7aeaba02b6ea", APIVersion:"batch/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1004 00:58:03.318206       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"71340b40-8e35-460f-a4c5-b024aaade8f0", APIVersion:"batch/v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1004 01:00:54.966068       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"7168fac6-16ea-4af3-b0a0-fd9aebaca9e9", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1004 01:00:54.980887       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"7f7d33c1-d9da-4008-a143-848fd04f42db", APIVersion:"apps/v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-82fh5
	
	* 
	* ==> kube-proxy [d19d9f7c97f11bb057802af03237455322358f351c64f12a03ab4149712eeeeb] <==
	* W1004 00:57:50.142186       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1004 00:57:50.149988       1 node.go:136] Successfully retrieved node IP: 192.168.39.57
	I1004 00:57:50.150054       1 server_others.go:186] Using iptables Proxier.
	I1004 00:57:50.150367       1 server.go:583] Version: v1.18.20
	I1004 00:57:50.152560       1 config.go:315] Starting service config controller
	I1004 00:57:50.159585       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1004 00:57:50.153447       1 config.go:133] Starting endpoints config controller
	I1004 00:57:50.159677       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1004 00:57:50.260086       1 shared_informer.go:230] Caches are synced for endpoints config 
	I1004 00:57:50.260333       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [b8bdcf7356c9dc2fe2766d9694a4a45866e0e69cb428ad530f4dc68d0e7e7d2a] <==
	* I1004 00:57:30.495624       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1004 00:57:30.495764       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:57:30.495828       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 00:57:30.495860       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1004 00:57:30.500592       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 00:57:30.501002       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 00:57:30.501102       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:57:30.501347       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 00:57:30.501438       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:57:30.501533       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:57:30.501878       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 00:57:30.502165       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:57:30.502819       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 00:57:30.503091       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 00:57:30.503402       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 00:57:30.503680       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:57:31.418469       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 00:57:31.421089       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 00:57:31.500778       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 00:57:31.636133       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 00:57:31.711712       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 00:57:31.833632       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 00:57:34.596149       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1004 00:57:48.206480       1 factory.go:503] pod: kube-system/coredns-66bff467f8-qpn7n is already present in the active queue
	E1004 00:57:48.285658       1 factory.go:503] pod: kube-system/coredns-66bff467f8-2drc5 is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 00:56:56 UTC, ends at Wed 2023-10-04 01:01:13 UTC. --
	Oct 04 00:58:03 ingress-addon-legacy-533597 kubelet[1428]: W1004 00:58:03.305071    1428 pod_container_deletor.go:77] Container "db19f4205260bd477176e0c173fec6760c24cd3ce271e6bf73b30da9bede3fd9" not found in pod's containers
	Oct 04 00:58:04 ingress-addon-legacy-533597 kubelet[1428]: W1004 00:58:04.310470    1428 pod_container_deletor.go:77] Container "e586ea44e9e46c4d60e94d753724fb315965a928d6beeed1c039ca74477e63f3" not found in pod's containers
	Oct 04 00:58:04 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:04.434117    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-admission-token-gl25f" (UniqueName: "kubernetes.io/secret/17a4b23c-278e-4ce3-a25c-af727f10bce5-ingress-nginx-admission-token-gl25f") pod "17a4b23c-278e-4ce3-a25c-af727f10bce5" (UID: "17a4b23c-278e-4ce3-a25c-af727f10bce5")
	Oct 04 00:58:04 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:04.444094    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a4b23c-278e-4ce3-a25c-af727f10bce5-ingress-nginx-admission-token-gl25f" (OuterVolumeSpecName: "ingress-nginx-admission-token-gl25f") pod "17a4b23c-278e-4ce3-a25c-af727f10bce5" (UID: "17a4b23c-278e-4ce3-a25c-af727f10bce5"). InnerVolumeSpecName "ingress-nginx-admission-token-gl25f". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 00:58:04 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:04.534550    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-gl25f" (UniqueName: "kubernetes.io/secret/17a4b23c-278e-4ce3-a25c-af727f10bce5-ingress-nginx-admission-token-gl25f") on node "ingress-addon-legacy-533597" DevicePath ""
	Oct 04 00:58:15 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:15.364740    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 04 00:58:15 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:15.470776    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-jgtzm" (UniqueName: "kubernetes.io/secret/617c590d-46d9-4edc-b168-7bb0a1fb6bf5-minikube-ingress-dns-token-jgtzm") pod "kube-ingress-dns-minikube" (UID: "617c590d-46d9-4edc-b168-7bb0a1fb6bf5")
	Oct 04 00:58:33 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:33.136000    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 04 00:58:33 ingress-addon-legacy-533597 kubelet[1428]: I1004 00:58:33.230636    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-v4fdh" (UniqueName: "kubernetes.io/secret/e2db4a68-cb2d-46e7-a035-b1952849bb0a-default-token-v4fdh") pod "nginx" (UID: "e2db4a68-cb2d-46e7-a035-b1952849bb0a")
	Oct 04 01:00:54 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:54.998470    1428 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Oct 04 01:00:55 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:55.114605    1428 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-v4fdh" (UniqueName: "kubernetes.io/secret/91b3b40d-a03d-4246-a01c-448ec71deb68-default-token-v4fdh") pod "hello-world-app-5f5d8b66bb-82fh5" (UID: "91b3b40d-a03d-4246-a01c-448ec71deb68")
	Oct 04 01:00:56 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:56.084678    1428 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 964c2db481cf20617611416cb56eecd21810b0a9a384f207ee388837fa63762a
	Oct 04 01:00:57 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:57.222732    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-jgtzm" (UniqueName: "kubernetes.io/secret/617c590d-46d9-4edc-b168-7bb0a1fb6bf5-minikube-ingress-dns-token-jgtzm") pod "617c590d-46d9-4edc-b168-7bb0a1fb6bf5" (UID: "617c590d-46d9-4edc-b168-7bb0a1fb6bf5")
	Oct 04 01:00:57 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:57.232543    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/617c590d-46d9-4edc-b168-7bb0a1fb6bf5-minikube-ingress-dns-token-jgtzm" (OuterVolumeSpecName: "minikube-ingress-dns-token-jgtzm") pod "617c590d-46d9-4edc-b168-7bb0a1fb6bf5" (UID: "617c590d-46d9-4edc-b168-7bb0a1fb6bf5"). InnerVolumeSpecName "minikube-ingress-dns-token-jgtzm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 01:00:57 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:00:57.323382    1428 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-jgtzm" (UniqueName: "kubernetes.io/secret/617c590d-46d9-4edc-b168-7bb0a1fb6bf5-minikube-ingress-dns-token-jgtzm") on node "ingress-addon-legacy-533597" DevicePath ""
	Oct 04 01:01:05 ingress-addon-legacy-533597 kubelet[1428]: E1004 01:01:05.121215    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-shzbh.178ac1c5d6f9c8d3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-shzbh", UID:"2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d", APIVersion:"v1", ResourceVersion:"448", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-533597"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13f4c344705ded3, ext:211565334464, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13f4c344705ded3, ext:211565334464, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-shzbh.178ac1c5d6f9c8d3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 04 01:01:05 ingress-addon-legacy-533597 kubelet[1428]: E1004 01:01:05.146779    1428 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-shzbh.178ac1c5d6f9c8d3", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-shzbh", UID:"2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d", APIVersion:"v1", ResourceVersion:"448", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-533597"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13f4c344705ded3, ext:211565334464, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13f4c3447dafe66, ext:211579301714, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-shzbh.178ac1c5d6f9c8d3" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 04 01:01:08 ingress-addon-legacy-533597 kubelet[1428]: W1004 01:01:08.134909    1428 pod_container_deletor.go:77] Container "ff496f2c8f9584bb4f5ed099f7d7fac384909ede2f7d86d1004621d0771fe446" not found in pod's containers
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.163547    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-bcv6l" (UniqueName: "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-ingress-nginx-token-bcv6l") pod "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d" (UID: "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d")
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.163579    1428 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-webhook-cert") pod "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d" (UID: "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d")
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.168438    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-ingress-nginx-token-bcv6l" (OuterVolumeSpecName: "ingress-nginx-token-bcv6l") pod "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d" (UID: "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d"). InnerVolumeSpecName "ingress-nginx-token-bcv6l". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.168531    1428 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d" (UID: "2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.263988    1428 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-webhook-cert") on node "ingress-addon-legacy-533597" DevicePath ""
	Oct 04 01:01:09 ingress-addon-legacy-533597 kubelet[1428]: I1004 01:01:09.264017    1428 reconciler.go:319] Volume detached for volume "ingress-nginx-token-bcv6l" (UniqueName: "kubernetes.io/secret/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d-ingress-nginx-token-bcv6l") on node "ingress-addon-legacy-533597" DevicePath ""
	Oct 04 01:01:10 ingress-addon-legacy-533597 kubelet[1428]: W1004 01:01:10.166894    1428 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/2ee03a3b-4b35-4b1e-84eb-0aa990f94a3d/volumes" does not exist
	
	* 
	* ==> storage-provisioner [7813f5aedb5cfdf95402df3244e25a7d82e2ec2129b5e18204891b2b094c75d5] <==
	* I1004 00:57:50.906812       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 00:57:50.917815       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 00:57:50.917905       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 00:57:50.930210       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 00:57:50.931015       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e14753a1-2235-4212-a849-9805e278d22d", APIVersion:"v1", ResourceVersion:"389", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-533597_416a858c-518c-464b-a397-1fde4b710306 became leader
	I1004 00:57:50.931438       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-533597_416a858c-518c-464b-a397-1fde4b710306!
	I1004 00:57:51.032370       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-533597_416a858c-518c-464b-a397-1fde4b710306!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-533597 -n ingress-addon-legacy-533597
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-533597 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (178.22s)
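Note on the coredns output earlier in this post-mortem: the long run of NXDOMAIN answers is expected resolver behaviour rather than a fault. The querying pod (10.244.0.5, whose search suffixes place it in the ingress-nginx namespace) looks up hello-world-app.default.svc.cluster.local, and because that name has fewer dots than the default ndots:5 threshold, libc first retries it against every suffix in the pod's search path before the bare name returns NOERROR. A minimal way to confirm the search path from inside a pod in that namespace is sketched below; the deploy/ name matches the controller deployment created during this test, the namespace may already be terminating by the time the captured logs end, and the nameserver IP in the comment is the conventional kube-dns ClusterIP rather than a value taken from this run.

	# Print the resolver configuration the kubelet injected into the controller pod
	kubectl --context ingress-addon-legacy-533597 -n ingress-nginx \
	  exec deploy/ingress-nginx-controller -- cat /etc/resolv.conf
	# Typical contents:
	#   search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	#   nameserver 10.96.0.10
	#   options ndots:5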

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- sh -c "ping -c 1 192.168.39.1": exit status 1 (177.002886ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-8g74z): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- sh -c "ping -c 1 192.168.39.1": exit status 1 (178.162855ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-ckxb4): exit status 1
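The "ping: permission denied (are you root?)" failures above are busybox reporting that it cannot open an ICMP socket from an unprivileged container: the pod has no CAP_NET_RAW, and the node's net.ipv4.ping_group_range does not cover the container's GID. A hedged sketch of the two usual remedies follows; neither is part of the test itself, and the sysctl range, the JSON-patch path, and the securityContext values are assumptions about the busybox deployment the test created rather than anything taken from its manifests.

	# Remedy 1: allow unprivileged ICMP datagram sockets node-wide (run inside the node,
	# e.g. via `out/minikube-linux-amd64 -p multinode-038823 ssh`); this only helps if the
	# image's ping can fall back to SOCK_DGRAM, and the GID range here is illustrative
	sudo sysctl -w net.ipv4.ping_group_range="0 2147483647"

	# Remedy 2: grant the container CAP_NET_RAW so a raw ICMP socket is permitted
	out/minikube-linux-amd64 kubectl -p multinode-038823 -- patch deployment busybox --type=json -p='[
	  {"op": "add",
	   "path": "/spec/template/spec/containers/0/securityContext",
	   "value": {"capabilities": {"add": ["NET_RAW"]}}}]'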
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-038823 -n multinode-038823
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-038823 logs -n 25: (1.413707877s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-548527 ssh -- ls                    | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-548527 ssh --                       | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-548527                           | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	| start   | -p mount-start-2-548527                           | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC |                     |
	|         | --profile mount-start-2-548527                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-548527 ssh -- ls                    | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-548527 ssh --                       | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-548527                           | mount-start-2-548527 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	| delete  | -p mount-start-1-528023                           | mount-start-1-528023 | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:05 UTC |
	| start   | -p multinode-038823                               | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:05 UTC | 04 Oct 23 01:07 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- apply -f                   | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- rollout                    | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- get pods -o                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- get pods -o                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-8g74z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-ckxb4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-8g74z --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-ckxb4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-8g74z -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-ckxb4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- get pods -o                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-8g74z                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC |                     |
	|         | busybox-5bc68d56bd-8g74z -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC | 04 Oct 23 01:07 UTC |
	|         | busybox-5bc68d56bd-ckxb4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-038823 -- exec                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:07 UTC |                     |
	|         | busybox-5bc68d56bd-ckxb4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:05:37
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:05:37.418022  148021 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:05:37.418149  148021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:05:37.418157  148021 out.go:309] Setting ErrFile to fd 2...
	I1004 01:05:37.418161  148021 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:05:37.418376  148021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:05:37.418992  148021 out.go:303] Setting JSON to false
	I1004 01:05:37.419905  148021 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6489,"bootTime":1696375049,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:05:37.419974  148021 start.go:138] virtualization: kvm guest
	I1004 01:05:37.422389  148021 out.go:177] * [multinode-038823] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:05:37.424027  148021 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:05:37.424101  148021 notify.go:220] Checking for updates...
	I1004 01:05:37.425765  148021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:05:37.427656  148021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:05:37.429269  148021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:05:37.430822  148021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:05:37.432263  148021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:05:37.433998  148021 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:05:37.469648  148021 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:05:37.471006  148021 start.go:298] selected driver: kvm2
	I1004 01:05:37.471020  148021 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:05:37.471032  148021 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:05:37.471698  148021 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:05:37.471780  148021 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:05:37.486940  148021 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:05:37.487012  148021 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:05:37.487184  148021 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:05:37.487215  148021 cni.go:84] Creating CNI manager for ""
	I1004 01:05:37.487225  148021 cni.go:136] 0 nodes found, recommending kindnet
	I1004 01:05:37.487232  148021 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 01:05:37.487241  148021 start_flags.go:321] config:
	{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:05:37.487350  148021 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:05:37.489248  148021 out.go:177] * Starting control plane node multinode-038823 in cluster multinode-038823
	I1004 01:05:37.490532  148021 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:05:37.490566  148021 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:05:37.490577  148021 cache.go:57] Caching tarball of preloaded images
	I1004 01:05:37.490631  148021 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:05:37.490642  148021 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:05:37.490925  148021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:05:37.490947  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json: {Name:mk15cc7601abed3d3b3f8a7369a0b6e51ddc0306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:05:37.491071  148021 start.go:365] acquiring machines lock for multinode-038823: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:05:37.491098  148021 start.go:369] acquired machines lock for "multinode-038823" in 14.444µs
	I1004 01:05:37.491119  148021 start.go:93] Provisioning new machine with config: &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:05:37.491189  148021 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 01:05:37.492873  148021 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 01:05:37.492993  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:05:37.493020  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:05:37.507370  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
	I1004 01:05:37.507820  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:05:37.508348  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:05:37.508375  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:05:37.508651  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:05:37.508795  148021 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:05:37.508889  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:05:37.509089  148021 start.go:159] libmachine.API.Create for "multinode-038823" (driver="kvm2")
	I1004 01:05:37.509112  148021 client.go:168] LocalClient.Create starting
	I1004 01:05:37.509147  148021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 01:05:37.509186  148021 main.go:141] libmachine: Decoding PEM data...
	I1004 01:05:37.509205  148021 main.go:141] libmachine: Parsing certificate...
	I1004 01:05:37.509275  148021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 01:05:37.509301  148021 main.go:141] libmachine: Decoding PEM data...
	I1004 01:05:37.509320  148021 main.go:141] libmachine: Parsing certificate...
	I1004 01:05:37.509347  148021 main.go:141] libmachine: Running pre-create checks...
	I1004 01:05:37.509363  148021 main.go:141] libmachine: (multinode-038823) Calling .PreCreateCheck
	I1004 01:05:37.509619  148021 main.go:141] libmachine: (multinode-038823) Calling .GetConfigRaw
	I1004 01:05:37.510000  148021 main.go:141] libmachine: Creating machine...
	I1004 01:05:37.510014  148021 main.go:141] libmachine: (multinode-038823) Calling .Create
	I1004 01:05:37.510118  148021 main.go:141] libmachine: (multinode-038823) Creating KVM machine...
	I1004 01:05:37.511401  148021 main.go:141] libmachine: (multinode-038823) DBG | found existing default KVM network
	I1004 01:05:37.512065  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:37.511938  148044 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000117350}
	I1004 01:05:37.517162  148021 main.go:141] libmachine: (multinode-038823) DBG | trying to create private KVM network mk-multinode-038823 192.168.39.0/24...
	I1004 01:05:37.587540  148021 main.go:141] libmachine: (multinode-038823) DBG | private KVM network mk-multinode-038823 192.168.39.0/24 created
	I1004 01:05:37.587581  148021 main.go:141] libmachine: (multinode-038823) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823 ...
	I1004 01:05:37.587600  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:37.587490  148044 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:05:37.587610  148021 main.go:141] libmachine: (multinode-038823) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 01:05:37.587627  148021 main.go:141] libmachine: (multinode-038823) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 01:05:37.818885  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:37.818713  148044 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa...
	I1004 01:05:37.984573  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:37.984414  148044 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/multinode-038823.rawdisk...
	I1004 01:05:37.984615  148021 main.go:141] libmachine: (multinode-038823) DBG | Writing magic tar header
	I1004 01:05:37.984631  148021 main.go:141] libmachine: (multinode-038823) DBG | Writing SSH key tar header
	I1004 01:05:37.984642  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:37.984548  148044 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823 ...
	I1004 01:05:37.984722  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823
	I1004 01:05:37.984760  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823 (perms=drwx------)
	I1004 01:05:37.984783  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 01:05:37.984794  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 01:05:37.984807  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 01:05:37.984822  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 01:05:37.984836  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:05:37.984853  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 01:05:37.984863  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 01:05:37.984870  148021 main.go:141] libmachine: (multinode-038823) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 01:05:37.984879  148021 main.go:141] libmachine: (multinode-038823) Creating domain...
	I1004 01:05:37.984892  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 01:05:37.984908  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home/jenkins
	I1004 01:05:37.984920  148021 main.go:141] libmachine: (multinode-038823) DBG | Checking permissions on dir: /home
	I1004 01:05:37.984933  148021 main.go:141] libmachine: (multinode-038823) DBG | Skipping /home - not owner
	I1004 01:05:37.986029  148021 main.go:141] libmachine: (multinode-038823) define libvirt domain using xml: 
	I1004 01:05:37.986050  148021 main.go:141] libmachine: (multinode-038823) <domain type='kvm'>
	I1004 01:05:37.986058  148021 main.go:141] libmachine: (multinode-038823)   <name>multinode-038823</name>
	I1004 01:05:37.986064  148021 main.go:141] libmachine: (multinode-038823)   <memory unit='MiB'>2200</memory>
	I1004 01:05:37.986074  148021 main.go:141] libmachine: (multinode-038823)   <vcpu>2</vcpu>
	I1004 01:05:37.986090  148021 main.go:141] libmachine: (multinode-038823)   <features>
	I1004 01:05:37.986101  148021 main.go:141] libmachine: (multinode-038823)     <acpi/>
	I1004 01:05:37.986109  148021 main.go:141] libmachine: (multinode-038823)     <apic/>
	I1004 01:05:37.986120  148021 main.go:141] libmachine: (multinode-038823)     <pae/>
	I1004 01:05:37.986133  148021 main.go:141] libmachine: (multinode-038823)     
	I1004 01:05:37.986142  148021 main.go:141] libmachine: (multinode-038823)   </features>
	I1004 01:05:37.986147  148021 main.go:141] libmachine: (multinode-038823)   <cpu mode='host-passthrough'>
	I1004 01:05:37.986153  148021 main.go:141] libmachine: (multinode-038823)   
	I1004 01:05:37.986160  148021 main.go:141] libmachine: (multinode-038823)   </cpu>
	I1004 01:05:37.986166  148021 main.go:141] libmachine: (multinode-038823)   <os>
	I1004 01:05:37.986176  148021 main.go:141] libmachine: (multinode-038823)     <type>hvm</type>
	I1004 01:05:37.986182  148021 main.go:141] libmachine: (multinode-038823)     <boot dev='cdrom'/>
	I1004 01:05:37.986190  148021 main.go:141] libmachine: (multinode-038823)     <boot dev='hd'/>
	I1004 01:05:37.986196  148021 main.go:141] libmachine: (multinode-038823)     <bootmenu enable='no'/>
	I1004 01:05:37.986203  148021 main.go:141] libmachine: (multinode-038823)   </os>
	I1004 01:05:37.986209  148021 main.go:141] libmachine: (multinode-038823)   <devices>
	I1004 01:05:37.986217  148021 main.go:141] libmachine: (multinode-038823)     <disk type='file' device='cdrom'>
	I1004 01:05:37.986272  148021 main.go:141] libmachine: (multinode-038823)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/boot2docker.iso'/>
	I1004 01:05:37.986309  148021 main.go:141] libmachine: (multinode-038823)       <target dev='hdc' bus='scsi'/>
	I1004 01:05:37.986333  148021 main.go:141] libmachine: (multinode-038823)       <readonly/>
	I1004 01:05:37.986353  148021 main.go:141] libmachine: (multinode-038823)     </disk>
	I1004 01:05:37.986369  148021 main.go:141] libmachine: (multinode-038823)     <disk type='file' device='disk'>
	I1004 01:05:37.986384  148021 main.go:141] libmachine: (multinode-038823)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 01:05:37.986404  148021 main.go:141] libmachine: (multinode-038823)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/multinode-038823.rawdisk'/>
	I1004 01:05:37.986417  148021 main.go:141] libmachine: (multinode-038823)       <target dev='hda' bus='virtio'/>
	I1004 01:05:37.986430  148021 main.go:141] libmachine: (multinode-038823)     </disk>
	I1004 01:05:37.986444  148021 main.go:141] libmachine: (multinode-038823)     <interface type='network'>
	I1004 01:05:37.986459  148021 main.go:141] libmachine: (multinode-038823)       <source network='mk-multinode-038823'/>
	I1004 01:05:37.986473  148021 main.go:141] libmachine: (multinode-038823)       <model type='virtio'/>
	I1004 01:05:37.986484  148021 main.go:141] libmachine: (multinode-038823)     </interface>
	I1004 01:05:37.986491  148021 main.go:141] libmachine: (multinode-038823)     <interface type='network'>
	I1004 01:05:37.986500  148021 main.go:141] libmachine: (multinode-038823)       <source network='default'/>
	I1004 01:05:37.986506  148021 main.go:141] libmachine: (multinode-038823)       <model type='virtio'/>
	I1004 01:05:37.986514  148021 main.go:141] libmachine: (multinode-038823)     </interface>
	I1004 01:05:37.986520  148021 main.go:141] libmachine: (multinode-038823)     <serial type='pty'>
	I1004 01:05:37.986528  148021 main.go:141] libmachine: (multinode-038823)       <target port='0'/>
	I1004 01:05:37.986534  148021 main.go:141] libmachine: (multinode-038823)     </serial>
	I1004 01:05:37.986541  148021 main.go:141] libmachine: (multinode-038823)     <console type='pty'>
	I1004 01:05:37.986548  148021 main.go:141] libmachine: (multinode-038823)       <target type='serial' port='0'/>
	I1004 01:05:37.986555  148021 main.go:141] libmachine: (multinode-038823)     </console>
	I1004 01:05:37.986561  148021 main.go:141] libmachine: (multinode-038823)     <rng model='virtio'>
	I1004 01:05:37.986572  148021 main.go:141] libmachine: (multinode-038823)       <backend model='random'>/dev/random</backend>
	I1004 01:05:37.986594  148021 main.go:141] libmachine: (multinode-038823)     </rng>
	I1004 01:05:37.986613  148021 main.go:141] libmachine: (multinode-038823)     
	I1004 01:05:37.986633  148021 main.go:141] libmachine: (multinode-038823)     
	I1004 01:05:37.986645  148021 main.go:141] libmachine: (multinode-038823)   </devices>
	I1004 01:05:37.986656  148021 main.go:141] libmachine: (multinode-038823) </domain>
	I1004 01:05:37.986663  148021 main.go:141] libmachine: (multinode-038823) 
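The "define libvirt domain using xml" / "Creating domain..." steps above amount to handing the generated XML to libvirt and booting the resulting domain. A minimal sketch of that flow, assuming the libvirt Go bindings (libvirt.org/go/libvirt) and an already-assembled domainXML string (this is not the exact minikube driver code):

	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	// defineAndStart persists a domain definition from XML and starts it,
	// mirroring the "define libvirt domain using xml" step in the log above.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // register the persistent domain
		if err != nil {
			return err
		}
		defer dom.Free()

		return dom.Create() // actually start the VM
	}

	func main() {
		// placeholder XML; in the log the full <domain> document printed above is used
		if err := defineAndStart("<domain type='kvm'>...</domain>"); err != nil {
			log.Fatal(err)
		}
	}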
	I1004 01:05:37.992639  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:de:13:7a in network default
	I1004 01:05:37.993260  148021 main.go:141] libmachine: (multinode-038823) Ensuring networks are active...
	I1004 01:05:37.993281  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:37.994009  148021 main.go:141] libmachine: (multinode-038823) Ensuring network default is active
	I1004 01:05:37.994451  148021 main.go:141] libmachine: (multinode-038823) Ensuring network mk-multinode-038823 is active
	I1004 01:05:37.995201  148021 main.go:141] libmachine: (multinode-038823) Getting domain xml...
	I1004 01:05:37.996011  148021 main.go:141] libmachine: (multinode-038823) Creating domain...
	I1004 01:05:39.210992  148021 main.go:141] libmachine: (multinode-038823) Waiting to get IP...
	I1004 01:05:39.211787  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:39.212196  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:39.212225  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:39.212178  148044 retry.go:31] will retry after 240.481763ms: waiting for machine to come up
	I1004 01:05:39.454688  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:39.455154  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:39.455184  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:39.455121  148044 retry.go:31] will retry after 312.124934ms: waiting for machine to come up
	I1004 01:05:39.768612  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:39.769089  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:39.769131  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:39.769057  148044 retry.go:31] will retry after 416.990213ms: waiting for machine to come up
	I1004 01:05:40.187633  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:40.188102  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:40.188125  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:40.188065  148044 retry.go:31] will retry after 560.704597ms: waiting for machine to come up
	I1004 01:05:40.750743  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:40.751177  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:40.751203  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:40.751124  148044 retry.go:31] will retry after 598.210815ms: waiting for machine to come up
	I1004 01:05:41.350883  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:41.351287  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:41.351322  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:41.351256  148044 retry.go:31] will retry after 701.533324ms: waiting for machine to come up
	I1004 01:05:42.054009  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:42.054426  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:42.054459  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:42.054372  148044 retry.go:31] will retry after 769.309549ms: waiting for machine to come up
	I1004 01:05:42.824849  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:42.825323  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:42.825345  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:42.825279  148044 retry.go:31] will retry after 1.471721901s: waiting for machine to come up
	I1004 01:05:44.298203  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:44.298572  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:44.298607  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:44.298513  148044 retry.go:31] will retry after 1.245557842s: waiting for machine to come up
	I1004 01:05:45.545981  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:45.546474  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:45.546501  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:45.546421  148044 retry.go:31] will retry after 1.706913703s: waiting for machine to come up
	I1004 01:05:47.254941  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:47.255368  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:47.255402  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:47.255301  148044 retry.go:31] will retry after 2.396703566s: waiting for machine to come up
	I1004 01:05:49.655004  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:49.655478  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:49.655511  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:49.655439  148044 retry.go:31] will retry after 3.545320069s: waiting for machine to come up
	I1004 01:05:53.202734  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:53.203067  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:53.203103  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:53.203022  148044 retry.go:31] will retry after 3.600430994s: waiting for machine to come up
	I1004 01:05:56.807747  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:05:56.808195  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:05:56.808232  148021 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:05:56.808155  148044 retry.go:31] will retry after 4.946846705s: waiting for machine to come up
	I1004 01:06:01.759922  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.760355  148021 main.go:141] libmachine: (multinode-038823) Found IP for machine: 192.168.39.212
	I1004 01:06:01.760375  148021 main.go:141] libmachine: (multinode-038823) Reserving static IP address...
	I1004 01:06:01.760392  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has current primary IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.760773  148021 main.go:141] libmachine: (multinode-038823) DBG | unable to find host DHCP lease matching {name: "multinode-038823", mac: "52:54:00:89:cd:1c", ip: "192.168.39.212"} in network mk-multinode-038823
	I1004 01:06:01.833417  148021 main.go:141] libmachine: (multinode-038823) DBG | Getting to WaitForSSH function...
	I1004 01:06:01.833453  148021 main.go:141] libmachine: (multinode-038823) Reserved static IP address: 192.168.39.212
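The repeated "will retry after ...: waiting for machine to come up" lines above follow a poll-with-growing-delay pattern. A rough, self-contained sketch of that pattern in Go (an assumed waitFor helper for illustration, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it succeeds or the timeout expires, sleeping a
	// growing, jittered delay between attempts - the same shape as the
	// "will retry after Xms" lines in the log.
	func waitFor(check func() (bool, error), timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			ok, err := check()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // up to 50% jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2 // grow the base delay each round
		}
		return errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		_ = waitFor(func() (bool, error) {
			attempts++
			return attempts >= 5, nil // pretend the DHCP lease shows up on the 5th lookup
		}, 30*time.Second)
	}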
	I1004 01:06:01.833478  148021 main.go:141] libmachine: (multinode-038823) Waiting for SSH to be available...
	I1004 01:06:01.836092  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.836490  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:minikube Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:01.836591  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.836764  148021 main.go:141] libmachine: (multinode-038823) DBG | Using SSH client type: external
	I1004 01:06:01.836790  148021 main.go:141] libmachine: (multinode-038823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa (-rw-------)
	I1004 01:06:01.836813  148021 main.go:141] libmachine: (multinode-038823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:06:01.836824  148021 main.go:141] libmachine: (multinode-038823) DBG | About to run SSH command:
	I1004 01:06:01.836839  148021 main.go:141] libmachine: (multinode-038823) DBG | exit 0
	I1004 01:06:01.930005  148021 main.go:141] libmachine: (multinode-038823) DBG | SSH cmd err, output: <nil>: 
	I1004 01:06:01.930280  148021 main.go:141] libmachine: (multinode-038823) KVM machine creation complete!
	I1004 01:06:01.930584  148021 main.go:141] libmachine: (multinode-038823) Calling .GetConfigRaw
	I1004 01:06:01.931145  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:01.931326  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:01.931471  148021 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 01:06:01.931484  148021 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:06:01.932609  148021 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 01:06:01.932630  148021 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 01:06:01.932639  148021 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 01:06:01.932650  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:01.934911  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.935272  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:01.935308  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:01.935384  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:01.935556  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:01.935741  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:01.935926  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:01.936138  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:01.936503  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:01.936518  148021 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 01:06:02.061175  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:06:02.061199  148021 main.go:141] libmachine: Detecting the provisioner...
	I1004 01:06:02.061208  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.063952  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.064301  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.064340  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.064514  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:02.064739  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.064888  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.065029  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:02.065181  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:02.065538  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:02.065551  148021 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 01:06:02.190752  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 01:06:02.190841  148021 main.go:141] libmachine: found compatible host: buildroot
	I1004 01:06:02.190854  148021 main.go:141] libmachine: Provisioning with buildroot...
	I1004 01:06:02.190872  148021 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:06:02.191171  148021 buildroot.go:166] provisioning hostname "multinode-038823"
	I1004 01:06:02.191201  148021 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:06:02.191404  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.193998  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.194417  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.194448  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.194630  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:02.194807  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.194933  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.195042  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:02.195184  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:02.195562  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:02.195578  148021 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823 && echo "multinode-038823" | sudo tee /etc/hostname
	I1004 01:06:02.330849  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-038823
	
	I1004 01:06:02.330881  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.333662  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.334086  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.334120  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.334214  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:02.334400  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.334549  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.334699  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:02.334823  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:02.335143  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:02.335163  148021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-038823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-038823/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-038823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:06:02.466554  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:06:02.466592  148021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:06:02.466643  148021 buildroot.go:174] setting up certificates
	I1004 01:06:02.466657  148021 provision.go:83] configureAuth start
	I1004 01:06:02.466677  148021 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:06:02.467020  148021 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:06:02.469481  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.469798  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.469825  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.469934  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.472157  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.472465  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.472493  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.472622  148021 provision.go:138] copyHostCerts
	I1004 01:06:02.472662  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:06:02.472694  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:06:02.472704  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:06:02.472755  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:06:02.472827  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:06:02.472844  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:06:02.472849  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:06:02.472870  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:06:02.472909  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:06:02.472925  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:06:02.472931  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:06:02.472948  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:06:02.472990  148021 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.multinode-038823 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube multinode-038823]
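The "generating server cert ... san=[...]" line above is ordinary x509 issuance: a fresh key pair plus a certificate signed by the minikube CA that lists the node's IPs and hostnames as SANs. A hedged sketch with Go's crypto/x509 (SAN and org values are copied from the log line; the helper itself is hypothetical):

	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert issues a server certificate signed by the given CA, carrying the
	// same kind of SAN list as the provision.go log line above.
	func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
		key, err = rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-038823"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-038823"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.212"), net.ParseIP("127.0.0.1")},
		}
		certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		return certDER, key, err
	}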
	I1004 01:06:02.559822  148021 provision.go:172] copyRemoteCerts
	I1004 01:06:02.559887  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:06:02.559913  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.562566  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.562863  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.562889  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.563103  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:02.563305  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.563464  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:02.563628  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:02.654850  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 01:06:02.654934  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:06:02.678709  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 01:06:02.678788  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 01:06:02.702226  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 01:06:02.702293  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 01:06:02.726278  148021 provision.go:86] duration metric: configureAuth took 259.602026ms
	I1004 01:06:02.726307  148021 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:06:02.726464  148021 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:06:02.726539  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:02.729226  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.729623  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:02.729657  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:02.729913  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:02.730128  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.730358  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:02.730494  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:02.730667  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:02.731114  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:02.731151  148021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:06:03.040258  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:06:03.040291  148021 main.go:141] libmachine: Checking connection to Docker...
	I1004 01:06:03.040300  148021 main.go:141] libmachine: (multinode-038823) Calling .GetURL
	I1004 01:06:03.041515  148021 main.go:141] libmachine: (multinode-038823) DBG | Using libvirt version 6000000
	I1004 01:06:03.043541  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.043850  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.043882  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.044014  148021 main.go:141] libmachine: Docker is up and running!
	I1004 01:06:03.044032  148021 main.go:141] libmachine: Reticulating splines...
	I1004 01:06:03.044039  148021 client.go:171] LocalClient.Create took 25.534919306s
	I1004 01:06:03.044062  148021 start.go:167] duration metric: libmachine.API.Create for "multinode-038823" took 25.53497485s
	I1004 01:06:03.044072  148021 start.go:300] post-start starting for "multinode-038823" (driver="kvm2")
	I1004 01:06:03.044081  148021 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:06:03.044099  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:03.044347  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:06:03.044373  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:03.046558  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.046923  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.046963  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.047052  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:03.047225  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:03.047392  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:03.047527  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:03.141283  148021 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:06:03.145864  148021 command_runner.go:130] > NAME=Buildroot
	I1004 01:06:03.145888  148021 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1004 01:06:03.145896  148021 command_runner.go:130] > ID=buildroot
	I1004 01:06:03.145904  148021 command_runner.go:130] > VERSION_ID=2021.02.12
	I1004 01:06:03.145918  148021 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1004 01:06:03.145958  148021 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:06:03.145976  148021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:06:03.146062  148021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:06:03.146169  148021 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:06:03.146179  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 01:06:03.146265  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:06:03.154896  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:06:03.176975  148021 start.go:303] post-start completed in 132.889444ms
	I1004 01:06:03.177035  148021 main.go:141] libmachine: (multinode-038823) Calling .GetConfigRaw
	I1004 01:06:03.177670  148021 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:06:03.180251  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.180639  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.180666  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.180906  148021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:06:03.181126  148021 start.go:128] duration metric: createHost completed in 25.68992616s
	I1004 01:06:03.181165  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:03.183302  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.183615  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.183645  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.183769  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:03.183965  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:03.184093  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:03.184199  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:03.184330  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:06:03.184647  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:06:03.184658  148021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:06:03.306793  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696381563.277914838
	
	I1004 01:06:03.306821  148021 fix.go:206] guest clock: 1696381563.277914838
	I1004 01:06:03.306832  148021 fix.go:219] Guest: 2023-10-04 01:06:03.277914838 +0000 UTC Remote: 2023-10-04 01:06:03.181152037 +0000 UTC m=+25.793795694 (delta=96.762801ms)
	I1004 01:06:03.306856  148021 fix.go:190] guest clock delta is within tolerance: 96.762801ms
	I1004 01:06:03.306862  148021 start.go:83] releasing machines lock for "multinode-038823", held for 25.81575569s
	I1004 01:06:03.306881  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:03.307160  148021 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:06:03.309644  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.310040  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.310071  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.310207  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:03.310734  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:03.310934  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:03.311029  148021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:06:03.311088  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:03.311163  148021 ssh_runner.go:195] Run: cat /version.json
	I1004 01:06:03.311186  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:03.313469  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.313611  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.313868  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.313909  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.313941  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:03.313961  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:03.314139  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:03.314150  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:03.314324  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:03.314331  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:03.314481  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:03.314490  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:03.314621  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:03.314627  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:03.402537  148021 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1004 01:06:03.423811  148021 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 01:06:03.424681  148021 ssh_runner.go:195] Run: systemctl --version
	I1004 01:06:03.430409  148021 command_runner.go:130] > systemd 247 (247)
	I1004 01:06:03.430442  148021 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1004 01:06:03.430512  148021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:06:03.591859  148021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 01:06:03.597973  148021 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 01:06:03.598109  148021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:06:03.598179  148021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:06:03.613981  148021 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1004 01:06:03.614305  148021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:06:03.614325  148021 start.go:469] detecting cgroup driver to use...
	I1004 01:06:03.614381  148021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:06:03.629915  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:06:03.643761  148021 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:06:03.643849  148021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:06:03.657450  148021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:06:03.670822  148021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:06:03.781743  148021 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1004 01:06:03.781820  148021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:06:03.895703  148021 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1004 01:06:03.895739  148021 docker.go:213] disabling docker service ...
	I1004 01:06:03.895795  148021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:06:03.908567  148021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:06:03.919670  148021 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1004 01:06:03.919808  148021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:06:03.932565  148021 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1004 01:06:04.027301  148021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:06:04.039559  148021 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1004 01:06:04.039880  148021 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1004 01:06:04.126517  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:06:04.138978  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:06:04.155774  148021 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 01:06:04.156098  148021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:06:04.156159  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:06:04.164939  148021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:06:04.165012  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:06:04.174146  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:06:04.183188  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:06:04.192067  148021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:06:04.201211  148021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:06:04.209154  148021 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:06:04.209340  148021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:06:04.209406  148021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:06:04.221985  148021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:06:04.231211  148021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:06:04.339890  148021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:06:04.506548  148021 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:06:04.506640  148021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:06:04.512079  148021 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 01:06:04.512107  148021 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 01:06:04.512134  148021 command_runner.go:130] > Device: 16h/22d	Inode: 741         Links: 1
	I1004 01:06:04.512145  148021 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:06:04.512153  148021 command_runner.go:130] > Access: 2023-10-04 01:06:04.465502698 +0000
	I1004 01:06:04.512164  148021 command_runner.go:130] > Modify: 2023-10-04 01:06:04.465502698 +0000
	I1004 01:06:04.512172  148021 command_runner.go:130] > Change: 2023-10-04 01:06:04.465502698 +0000
	I1004 01:06:04.512179  148021 command_runner.go:130] >  Birth: -
	I1004 01:06:04.512210  148021 start.go:537] Will wait 60s for crictl version
	I1004 01:06:04.512261  148021 ssh_runner.go:195] Run: which crictl
	I1004 01:06:04.516379  148021 command_runner.go:130] > /usr/bin/crictl
	I1004 01:06:04.516444  148021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:06:04.559093  148021 command_runner.go:130] > Version:  0.1.0
	I1004 01:06:04.559117  148021 command_runner.go:130] > RuntimeName:  cri-o
	I1004 01:06:04.559122  148021 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1004 01:06:04.559126  148021 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 01:06:04.559172  148021 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:06:04.559262  148021 ssh_runner.go:195] Run: crio --version
	I1004 01:06:04.608190  148021 command_runner.go:130] > crio version 1.24.1
	I1004 01:06:04.608219  148021 command_runner.go:130] > Version:          1.24.1
	I1004 01:06:04.608230  148021 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:06:04.608237  148021 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:06:04.608247  148021 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:06:04.608254  148021 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:06:04.608262  148021 command_runner.go:130] > Compiler:         gc
	I1004 01:06:04.608276  148021 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:06:04.608286  148021 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:06:04.608300  148021 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:06:04.608307  148021 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:06:04.608312  148021 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:06:04.608387  148021 ssh_runner.go:195] Run: crio --version
	I1004 01:06:04.654985  148021 command_runner.go:130] > crio version 1.24.1
	I1004 01:06:04.655014  148021 command_runner.go:130] > Version:          1.24.1
	I1004 01:06:04.655026  148021 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:06:04.655033  148021 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:06:04.655042  148021 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:06:04.655050  148021 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:06:04.655056  148021 command_runner.go:130] > Compiler:         gc
	I1004 01:06:04.655078  148021 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:06:04.655092  148021 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:06:04.655102  148021 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:06:04.655113  148021 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:06:04.655120  148021 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:06:04.658290  148021 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:06:04.659756  148021 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:06:04.662443  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:04.662783  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:04.662821  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:04.662987  148021 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:06:04.667171  148021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:06:04.680026  148021 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:06:04.680102  148021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:06:04.713530  148021 command_runner.go:130] > {
	I1004 01:06:04.713582  148021 command_runner.go:130] >   "images": [
	I1004 01:06:04.713590  148021 command_runner.go:130] >   ]
	I1004 01:06:04.713596  148021 command_runner.go:130] > }
	I1004 01:06:04.713711  148021 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:06:04.713790  148021 ssh_runner.go:195] Run: which lz4
	I1004 01:06:04.717796  148021 command_runner.go:130] > /usr/bin/lz4
	I1004 01:06:04.717831  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 01:06:04.717944  148021 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 01:06:04.721929  148021 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:06:04.722193  148021 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:06:04.722219  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:06:06.539088  148021 crio.go:444] Took 1.821176 seconds to copy over tarball
	I1004 01:06:06.539174  148021 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:06:09.321405  148021 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.7821964s)
	I1004 01:06:09.321448  148021 crio.go:451] Took 2.782323 seconds to extract the tarball
	I1004 01:06:09.321460  148021 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:06:09.361928  148021 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:06:09.425556  148021 command_runner.go:130] > {
	I1004 01:06:09.425580  148021 command_runner.go:130] >   "images": [
	I1004 01:06:09.425584  148021 command_runner.go:130] >     {
	I1004 01:06:09.425592  148021 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1004 01:06:09.425597  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.425603  148021 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1004 01:06:09.425607  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425611  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.425619  148021 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1004 01:06:09.425626  148021 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1004 01:06:09.425630  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425645  148021 command_runner.go:130] >       "size": "65258016",
	I1004 01:06:09.425650  148021 command_runner.go:130] >       "uid": null,
	I1004 01:06:09.425653  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.425661  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.425666  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.425670  148021 command_runner.go:130] >     },
	I1004 01:06:09.425674  148021 command_runner.go:130] >     {
	I1004 01:06:09.425682  148021 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 01:06:09.425686  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.425692  148021 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 01:06:09.425699  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425704  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.425712  148021 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 01:06:09.425726  148021 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 01:06:09.425730  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425737  148021 command_runner.go:130] >       "size": "31470524",
	I1004 01:06:09.425741  148021 command_runner.go:130] >       "uid": null,
	I1004 01:06:09.425748  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.425752  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.425757  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.425761  148021 command_runner.go:130] >     },
	I1004 01:06:09.425765  148021 command_runner.go:130] >     {
	I1004 01:06:09.425771  148021 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1004 01:06:09.425775  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.425780  148021 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1004 01:06:09.425787  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425791  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.425800  148021 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1004 01:06:09.425808  148021 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1004 01:06:09.425814  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425819  148021 command_runner.go:130] >       "size": "53621675",
	I1004 01:06:09.425823  148021 command_runner.go:130] >       "uid": null,
	I1004 01:06:09.425827  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.425832  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.425836  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.425853  148021 command_runner.go:130] >     },
	I1004 01:06:09.425857  148021 command_runner.go:130] >     {
	I1004 01:06:09.425863  148021 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1004 01:06:09.425868  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.425873  148021 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1004 01:06:09.425877  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425881  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.425888  148021 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1004 01:06:09.425896  148021 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1004 01:06:09.425907  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425914  148021 command_runner.go:130] >       "size": "295456551",
	I1004 01:06:09.425918  148021 command_runner.go:130] >       "uid": {
	I1004 01:06:09.425926  148021 command_runner.go:130] >         "value": "0"
	I1004 01:06:09.425930  148021 command_runner.go:130] >       },
	I1004 01:06:09.425935  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.425942  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.425946  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.425952  148021 command_runner.go:130] >     },
	I1004 01:06:09.425956  148021 command_runner.go:130] >     {
	I1004 01:06:09.425962  148021 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1004 01:06:09.425968  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.425974  148021 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1004 01:06:09.425979  148021 command_runner.go:130] >       ],
	I1004 01:06:09.425983  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.425993  148021 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1004 01:06:09.426002  148021 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1004 01:06:09.426008  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426013  148021 command_runner.go:130] >       "size": "127149008",
	I1004 01:06:09.426019  148021 command_runner.go:130] >       "uid": {
	I1004 01:06:09.426023  148021 command_runner.go:130] >         "value": "0"
	I1004 01:06:09.426029  148021 command_runner.go:130] >       },
	I1004 01:06:09.426033  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.426040  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.426044  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.426048  148021 command_runner.go:130] >     },
	I1004 01:06:09.426052  148021 command_runner.go:130] >     {
	I1004 01:06:09.426058  148021 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1004 01:06:09.426064  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.426070  148021 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1004 01:06:09.426076  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426080  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.426090  148021 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1004 01:06:09.426100  148021 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1004 01:06:09.426106  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426111  148021 command_runner.go:130] >       "size": "123171638",
	I1004 01:06:09.426118  148021 command_runner.go:130] >       "uid": {
	I1004 01:06:09.426122  148021 command_runner.go:130] >         "value": "0"
	I1004 01:06:09.426126  148021 command_runner.go:130] >       },
	I1004 01:06:09.426131  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.426138  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.426142  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.426148  148021 command_runner.go:130] >     },
	I1004 01:06:09.426151  148021 command_runner.go:130] >     {
	I1004 01:06:09.426160  148021 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1004 01:06:09.426166  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.426171  148021 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1004 01:06:09.426176  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426181  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.426190  148021 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1004 01:06:09.426199  148021 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1004 01:06:09.426205  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426210  148021 command_runner.go:130] >       "size": "74687895",
	I1004 01:06:09.426216  148021 command_runner.go:130] >       "uid": null,
	I1004 01:06:09.426220  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.426225  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.426230  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.426234  148021 command_runner.go:130] >     },
	I1004 01:06:09.426240  148021 command_runner.go:130] >     {
	I1004 01:06:09.426246  148021 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1004 01:06:09.426252  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.426258  148021 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1004 01:06:09.426263  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426268  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.426305  148021 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1004 01:06:09.426316  148021 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1004 01:06:09.426319  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426323  148021 command_runner.go:130] >       "size": "61485878",
	I1004 01:06:09.426327  148021 command_runner.go:130] >       "uid": {
	I1004 01:06:09.426335  148021 command_runner.go:130] >         "value": "0"
	I1004 01:06:09.426338  148021 command_runner.go:130] >       },
	I1004 01:06:09.426345  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.426350  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.426356  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.426362  148021 command_runner.go:130] >     },
	I1004 01:06:09.426368  148021 command_runner.go:130] >     {
	I1004 01:06:09.426374  148021 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1004 01:06:09.426381  148021 command_runner.go:130] >       "repoTags": [
	I1004 01:06:09.426385  148021 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1004 01:06:09.426392  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426396  148021 command_runner.go:130] >       "repoDigests": [
	I1004 01:06:09.426406  148021 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1004 01:06:09.426415  148021 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1004 01:06:09.426421  148021 command_runner.go:130] >       ],
	I1004 01:06:09.426425  148021 command_runner.go:130] >       "size": "750414",
	I1004 01:06:09.426432  148021 command_runner.go:130] >       "uid": {
	I1004 01:06:09.426436  148021 command_runner.go:130] >         "value": "65535"
	I1004 01:06:09.426442  148021 command_runner.go:130] >       },
	I1004 01:06:09.426447  148021 command_runner.go:130] >       "username": "",
	I1004 01:06:09.426453  148021 command_runner.go:130] >       "spec": null,
	I1004 01:06:09.426457  148021 command_runner.go:130] >       "pinned": false
	I1004 01:06:09.426463  148021 command_runner.go:130] >     }
	I1004 01:06:09.426466  148021 command_runner.go:130] >   ]
	I1004 01:06:09.426470  148021 command_runner.go:130] > }
	I1004 01:06:09.427855  148021 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:06:09.427878  148021 cache_images.go:84] Images are preloaded, skipping loading
	I1004 01:06:09.427945  148021 ssh_runner.go:195] Run: crio config
	I1004 01:06:09.483128  148021 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 01:06:09.483162  148021 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 01:06:09.483169  148021 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 01:06:09.483172  148021 command_runner.go:130] > #
	I1004 01:06:09.483179  148021 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 01:06:09.483185  148021 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 01:06:09.483192  148021 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 01:06:09.483198  148021 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 01:06:09.483202  148021 command_runner.go:130] > # reload'.
	I1004 01:06:09.483211  148021 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 01:06:09.483220  148021 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 01:06:09.483229  148021 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 01:06:09.483240  148021 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 01:06:09.483245  148021 command_runner.go:130] > [crio]
	I1004 01:06:09.483255  148021 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 01:06:09.483264  148021 command_runner.go:130] > # containers images, in this directory.
	I1004 01:06:09.483290  148021 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 01:06:09.483303  148021 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 01:06:09.483308  148021 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 01:06:09.483314  148021 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 01:06:09.483325  148021 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 01:06:09.483335  148021 command_runner.go:130] > storage_driver = "overlay"
	I1004 01:06:09.483346  148021 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 01:06:09.483359  148021 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 01:06:09.483366  148021 command_runner.go:130] > storage_option = [
	I1004 01:06:09.483394  148021 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 01:06:09.483588  148021 command_runner.go:130] > ]
	I1004 01:06:09.483612  148021 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 01:06:09.483622  148021 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 01:06:09.484178  148021 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 01:06:09.484197  148021 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 01:06:09.484208  148021 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 01:06:09.484215  148021 command_runner.go:130] > # always happen on a node reboot
	I1004 01:06:09.484983  148021 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 01:06:09.485000  148021 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 01:06:09.485010  148021 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 01:06:09.485024  148021 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 01:06:09.485551  148021 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1004 01:06:09.485574  148021 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 01:06:09.485588  148021 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 01:06:09.486260  148021 command_runner.go:130] > # internal_wipe = true
	I1004 01:06:09.486284  148021 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 01:06:09.486294  148021 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 01:06:09.486307  148021 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 01:06:09.486977  148021 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 01:06:09.486989  148021 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 01:06:09.486993  148021 command_runner.go:130] > [crio.api]
	I1004 01:06:09.486999  148021 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 01:06:09.487178  148021 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 01:06:09.487192  148021 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 01:06:09.487225  148021 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 01:06:09.487244  148021 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 01:06:09.487254  148021 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 01:06:09.487261  148021 command_runner.go:130] > # stream_port = "0"
	I1004 01:06:09.487266  148021 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 01:06:09.487273  148021 command_runner.go:130] > # stream_enable_tls = false
	I1004 01:06:09.487280  148021 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 01:06:09.487287  148021 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 01:06:09.487296  148021 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 01:06:09.487310  148021 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 01:06:09.487320  148021 command_runner.go:130] > # minutes.
	I1004 01:06:09.487427  148021 command_runner.go:130] > # stream_tls_cert = ""
	I1004 01:06:09.487440  148021 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 01:06:09.487450  148021 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 01:06:09.487475  148021 command_runner.go:130] > # stream_tls_key = ""
	I1004 01:06:09.487488  148021 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 01:06:09.487502  148021 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 01:06:09.487514  148021 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 01:06:09.487519  148021 command_runner.go:130] > # stream_tls_ca = ""
	I1004 01:06:09.487527  148021 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:06:09.487534  148021 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 01:06:09.487541  148021 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:06:09.487548  148021 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 01:06:09.487565  148021 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 01:06:09.487578  148021 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 01:06:09.487588  148021 command_runner.go:130] > [crio.runtime]
	I1004 01:06:09.487602  148021 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 01:06:09.487614  148021 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 01:06:09.487624  148021 command_runner.go:130] > # "nofile=1024:2048"
	I1004 01:06:09.487639  148021 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 01:06:09.487650  148021 command_runner.go:130] > # default_ulimits = [
	I1004 01:06:09.487658  148021 command_runner.go:130] > # ]
	I1004 01:06:09.487669  148021 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 01:06:09.487680  148021 command_runner.go:130] > # no_pivot = false
	I1004 01:06:09.487689  148021 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 01:06:09.487703  148021 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 01:06:09.487714  148021 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 01:06:09.487727  148021 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 01:06:09.487735  148021 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 01:06:09.487746  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:06:09.487776  148021 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 01:06:09.487787  148021 command_runner.go:130] > # Cgroup setting for conmon
	I1004 01:06:09.487801  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 01:06:09.487811  148021 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 01:06:09.487823  148021 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 01:06:09.487829  148021 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 01:06:09.487840  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:06:09.487851  148021 command_runner.go:130] > conmon_env = [
	I1004 01:06:09.487861  148021 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 01:06:09.487871  148021 command_runner.go:130] > ]
	I1004 01:06:09.487880  148021 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 01:06:09.487892  148021 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 01:06:09.487903  148021 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 01:06:09.487914  148021 command_runner.go:130] > # default_env = [
	I1004 01:06:09.487936  148021 command_runner.go:130] > # ]
	I1004 01:06:09.487950  148021 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 01:06:09.487960  148021 command_runner.go:130] > # selinux = false
	I1004 01:06:09.487971  148021 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 01:06:09.487985  148021 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 01:06:09.487998  148021 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 01:06:09.488008  148021 command_runner.go:130] > # seccomp_profile = ""
	I1004 01:06:09.488021  148021 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 01:06:09.488035  148021 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 01:06:09.488046  148021 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 01:06:09.488057  148021 command_runner.go:130] > # which might increase security.
	I1004 01:06:09.488069  148021 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 01:06:09.488083  148021 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 01:06:09.488096  148021 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 01:06:09.488108  148021 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 01:06:09.488122  148021 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 01:06:09.488143  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:06:09.488155  148021 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 01:06:09.488168  148021 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 01:06:09.488178  148021 command_runner.go:130] > # the cgroup blockio controller.
	I1004 01:06:09.488204  148021 command_runner.go:130] > # blockio_config_file = ""
	I1004 01:06:09.488217  148021 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 01:06:09.488227  148021 command_runner.go:130] > # irqbalance daemon.
	I1004 01:06:09.488240  148021 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 01:06:09.488254  148021 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 01:06:09.488266  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:06:09.488275  148021 command_runner.go:130] > # rdt_config_file = ""
	I1004 01:06:09.488285  148021 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 01:06:09.488293  148021 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 01:06:09.488300  148021 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 01:06:09.488311  148021 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 01:06:09.488322  148021 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 01:06:09.488338  148021 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 01:06:09.488348  148021 command_runner.go:130] > # will be added.
	I1004 01:06:09.488356  148021 command_runner.go:130] > # default_capabilities = [
	I1004 01:06:09.488366  148021 command_runner.go:130] > # 	"CHOWN",
	I1004 01:06:09.488376  148021 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 01:06:09.488383  148021 command_runner.go:130] > # 	"FSETID",
	I1004 01:06:09.488394  148021 command_runner.go:130] > # 	"FOWNER",
	I1004 01:06:09.488408  148021 command_runner.go:130] > # 	"SETGID",
	I1004 01:06:09.488418  148021 command_runner.go:130] > # 	"SETUID",
	I1004 01:06:09.488424  148021 command_runner.go:130] > # 	"SETPCAP",
	I1004 01:06:09.488432  148021 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 01:06:09.488441  148021 command_runner.go:130] > # 	"KILL",
	I1004 01:06:09.488447  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488460  148021 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 01:06:09.488474  148021 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:06:09.488484  148021 command_runner.go:130] > # default_sysctls = [
	I1004 01:06:09.488518  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488530  148021 command_runner.go:130] > # List of devices on the host that a
	I1004 01:06:09.488543  148021 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 01:06:09.488553  148021 command_runner.go:130] > # allowed_devices = [
	I1004 01:06:09.488557  148021 command_runner.go:130] > # 	"/dev/fuse",
	I1004 01:06:09.488561  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488598  148021 command_runner.go:130] > # List of additional devices. specified as
	I1004 01:06:09.488611  148021 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 01:06:09.488619  148021 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 01:06:09.488644  148021 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:06:09.488654  148021 command_runner.go:130] > # additional_devices = [
	I1004 01:06:09.488664  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488674  148021 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 01:06:09.488684  148021 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 01:06:09.488695  148021 command_runner.go:130] > # 	"/etc/cdi",
	I1004 01:06:09.488702  148021 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 01:06:09.488711  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488720  148021 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 01:06:09.488729  148021 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 01:06:09.488734  148021 command_runner.go:130] > # Defaults to false.
	I1004 01:06:09.488741  148021 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 01:06:09.488748  148021 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 01:06:09.488756  148021 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 01:06:09.488761  148021 command_runner.go:130] > # hooks_dir = [
	I1004 01:06:09.488768  148021 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 01:06:09.488772  148021 command_runner.go:130] > # ]
	I1004 01:06:09.488778  148021 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 01:06:09.488786  148021 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 01:06:09.488791  148021 command_runner.go:130] > # its default mounts from the following two files:
	I1004 01:06:09.488797  148021 command_runner.go:130] > #
	I1004 01:06:09.488803  148021 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 01:06:09.488819  148021 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 01:06:09.488832  148021 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 01:06:09.488841  148021 command_runner.go:130] > #
	I1004 01:06:09.488852  148021 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 01:06:09.488866  148021 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 01:06:09.488879  148021 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 01:06:09.488891  148021 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 01:06:09.488899  148021 command_runner.go:130] > #
	I1004 01:06:09.488907  148021 command_runner.go:130] > # default_mounts_file = ""
	I1004 01:06:09.488919  148021 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 01:06:09.488933  148021 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 01:06:09.488944  148021 command_runner.go:130] > pids_limit = 1024
	I1004 01:06:09.488957  148021 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1004 01:06:09.488971  148021 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 01:06:09.488984  148021 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 01:06:09.488998  148021 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 01:06:09.489004  148021 command_runner.go:130] > # log_size_max = -1
	I1004 01:06:09.489011  148021 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1004 01:06:09.489017  148021 command_runner.go:130] > # log_to_journald = false
	I1004 01:06:09.489023  148021 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 01:06:09.489031  148021 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 01:06:09.489038  148021 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 01:06:09.489044  148021 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 01:06:09.489052  148021 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 01:06:09.489056  148021 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 01:06:09.489062  148021 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 01:06:09.489066  148021 command_runner.go:130] > # read_only = false
	I1004 01:06:09.489074  148021 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 01:06:09.489081  148021 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 01:06:09.489088  148021 command_runner.go:130] > # live configuration reload.
	I1004 01:06:09.489092  148021 command_runner.go:130] > # log_level = "info"
	I1004 01:06:09.489101  148021 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 01:06:09.489106  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:06:09.489112  148021 command_runner.go:130] > # log_filter = ""
	I1004 01:06:09.489118  148021 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 01:06:09.489125  148021 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 01:06:09.489131  148021 command_runner.go:130] > # separated by comma.
	I1004 01:06:09.489148  148021 command_runner.go:130] > # uid_mappings = ""
	I1004 01:06:09.489162  148021 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 01:06:09.489175  148021 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 01:06:09.489184  148021 command_runner.go:130] > # separated by comma.
	I1004 01:06:09.489190  148021 command_runner.go:130] > # gid_mappings = ""
	I1004 01:06:09.489204  148021 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 01:06:09.489218  148021 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:06:09.489232  148021 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:06:09.489243  148021 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 01:06:09.489256  148021 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 01:06:09.489270  148021 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:06:09.489284  148021 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:06:09.489292  148021 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 01:06:09.489305  148021 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 01:06:09.489337  148021 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 01:06:09.489345  148021 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 01:06:09.489350  148021 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 01:06:09.489356  148021 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 01:06:09.489362  148021 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 01:06:09.489369  148021 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 01:06:09.489375  148021 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 01:06:09.489381  148021 command_runner.go:130] > drop_infra_ctr = false
	I1004 01:06:09.489387  148021 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 01:06:09.489393  148021 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 01:06:09.489402  148021 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 01:06:09.489406  148021 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 01:06:09.489414  148021 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 01:06:09.489422  148021 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 01:06:09.489427  148021 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 01:06:09.489435  148021 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 01:06:09.489442  148021 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 01:06:09.489448  148021 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 01:06:09.489456  148021 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1004 01:06:09.489462  148021 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1004 01:06:09.489468  148021 command_runner.go:130] > # default_runtime = "runc"
	I1004 01:06:09.489473  148021 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 01:06:09.489483  148021 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 01:06:09.489500  148021 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 01:06:09.489513  148021 command_runner.go:130] > # creation as a file is not desired either.
	I1004 01:06:09.489529  148021 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 01:06:09.489537  148021 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 01:06:09.489542  148021 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 01:06:09.489548  148021 command_runner.go:130] > # ]
	I1004 01:06:09.489554  148021 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 01:06:09.489563  148021 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 01:06:09.489569  148021 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1004 01:06:09.489578  148021 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1004 01:06:09.489581  148021 command_runner.go:130] > #
	I1004 01:06:09.489586  148021 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1004 01:06:09.489593  148021 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1004 01:06:09.489597  148021 command_runner.go:130] > #  runtime_type = "oci"
	I1004 01:06:09.489605  148021 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1004 01:06:09.489609  148021 command_runner.go:130] > #  privileged_without_host_devices = false
	I1004 01:06:09.489616  148021 command_runner.go:130] > #  allowed_annotations = []
	I1004 01:06:09.489620  148021 command_runner.go:130] > # Where:
	I1004 01:06:09.489625  148021 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1004 01:06:09.489634  148021 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1004 01:06:09.489643  148021 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 01:06:09.489652  148021 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 01:06:09.489656  148021 command_runner.go:130] > #   in $PATH.
	I1004 01:06:09.489662  148021 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1004 01:06:09.489669  148021 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 01:06:09.489675  148021 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1004 01:06:09.489682  148021 command_runner.go:130] > #   state.
	I1004 01:06:09.489689  148021 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 01:06:09.489699  148021 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1004 01:06:09.489705  148021 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 01:06:09.489713  148021 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 01:06:09.489720  148021 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 01:06:09.489728  148021 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 01:06:09.489733  148021 command_runner.go:130] > #   The currently recognized values are:
	I1004 01:06:09.489743  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 01:06:09.489754  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 01:06:09.489766  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 01:06:09.489779  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 01:06:09.489792  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 01:06:09.489805  148021 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 01:06:09.489812  148021 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 01:06:09.489821  148021 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1004 01:06:09.489829  148021 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 01:06:09.489874  148021 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 01:06:09.489887  148021 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 01:06:09.489894  148021 command_runner.go:130] > runtime_type = "oci"
	I1004 01:06:09.489901  148021 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 01:06:09.489905  148021 command_runner.go:130] > runtime_config_path = ""
	I1004 01:06:09.489912  148021 command_runner.go:130] > monitor_path = ""
	I1004 01:06:09.489916  148021 command_runner.go:130] > monitor_cgroup = ""
	I1004 01:06:09.489920  148021 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 01:06:09.489929  148021 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1004 01:06:09.489933  148021 command_runner.go:130] > # running containers
	I1004 01:06:09.489940  148021 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1004 01:06:09.489947  148021 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1004 01:06:09.489990  148021 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1004 01:06:09.490002  148021 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1004 01:06:09.490007  148021 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1004 01:06:09.490015  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1004 01:06:09.490019  148021 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1004 01:06:09.490026  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1004 01:06:09.490031  148021 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1004 01:06:09.490039  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1004 01:06:09.490046  148021 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 01:06:09.490053  148021 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 01:06:09.490060  148021 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 01:06:09.490069  148021 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 01:06:09.490078  148021 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 01:06:09.490085  148021 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 01:06:09.490096  148021 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 01:06:09.490105  148021 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 01:06:09.490114  148021 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 01:06:09.490123  148021 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 01:06:09.490127  148021 command_runner.go:130] > # Example:
	I1004 01:06:09.490132  148021 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 01:06:09.490144  148021 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 01:06:09.490149  148021 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 01:06:09.490157  148021 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 01:06:09.490161  148021 command_runner.go:130] > # cpuset = 0
	I1004 01:06:09.490168  148021 command_runner.go:130] > # cpushares = "0-1"
	I1004 01:06:09.490172  148021 command_runner.go:130] > # Where:
	I1004 01:06:09.490179  148021 command_runner.go:130] > # The workload name is workload-type.
	I1004 01:06:09.490186  148021 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 01:06:09.490194  148021 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 01:06:09.490200  148021 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 01:06:09.490209  148021 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 01:06:09.490218  148021 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 01:06:09.490221  148021 command_runner.go:130] > # 
	I1004 01:06:09.490231  148021 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 01:06:09.490234  148021 command_runner.go:130] > #
	I1004 01:06:09.490243  148021 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 01:06:09.490249  148021 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 01:06:09.490258  148021 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 01:06:09.490266  148021 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 01:06:09.490272  148021 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 01:06:09.490278  148021 command_runner.go:130] > [crio.image]
	I1004 01:06:09.490284  148021 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 01:06:09.490291  148021 command_runner.go:130] > # default_transport = "docker://"
	I1004 01:06:09.490297  148021 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 01:06:09.490305  148021 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:06:09.490310  148021 command_runner.go:130] > # global_auth_file = ""
	I1004 01:06:09.490317  148021 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 01:06:09.490322  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:06:09.490329  148021 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1004 01:06:09.490336  148021 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 01:06:09.490344  148021 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:06:09.490349  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:06:09.490355  148021 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 01:06:09.490361  148021 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 01:06:09.490366  148021 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 01:06:09.490372  148021 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 01:06:09.490377  148021 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 01:06:09.490381  148021 command_runner.go:130] > # pause_command = "/pause"
	I1004 01:06:09.490387  148021 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 01:06:09.490393  148021 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 01:06:09.490419  148021 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 01:06:09.490425  148021 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 01:06:09.490430  148021 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 01:06:09.490434  148021 command_runner.go:130] > # signature_policy = ""
	I1004 01:06:09.490439  148021 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 01:06:09.490445  148021 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 01:06:09.490449  148021 command_runner.go:130] > # changing them here.
	I1004 01:06:09.490453  148021 command_runner.go:130] > # insecure_registries = [
	I1004 01:06:09.490456  148021 command_runner.go:130] > # ]
	I1004 01:06:09.490462  148021 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 01:06:09.490467  148021 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 01:06:09.490471  148021 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 01:06:09.490476  148021 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 01:06:09.490480  148021 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 01:06:09.490485  148021 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1004 01:06:09.490489  148021 command_runner.go:130] > # CNI plugins.
	I1004 01:06:09.490492  148021 command_runner.go:130] > [crio.network]
	I1004 01:06:09.490498  148021 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 01:06:09.490506  148021 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 01:06:09.490510  148021 command_runner.go:130] > # cni_default_network = ""
	I1004 01:06:09.490516  148021 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 01:06:09.490520  148021 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 01:06:09.490525  148021 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 01:06:09.490529  148021 command_runner.go:130] > # plugin_dirs = [
	I1004 01:06:09.490533  148021 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 01:06:09.490536  148021 command_runner.go:130] > # ]
	I1004 01:06:09.490541  148021 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 01:06:09.490545  148021 command_runner.go:130] > [crio.metrics]
	I1004 01:06:09.490550  148021 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 01:06:09.490553  148021 command_runner.go:130] > enable_metrics = true
	I1004 01:06:09.490561  148021 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 01:06:09.490567  148021 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 01:06:09.490576  148021 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1004 01:06:09.490585  148021 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 01:06:09.490593  148021 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 01:06:09.490597  148021 command_runner.go:130] > # metrics_collectors = [
	I1004 01:06:09.490602  148021 command_runner.go:130] > # 	"operations",
	I1004 01:06:09.490607  148021 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 01:06:09.490613  148021 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 01:06:09.490617  148021 command_runner.go:130] > # 	"operations_errors",
	I1004 01:06:09.490624  148021 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 01:06:09.490628  148021 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 01:06:09.490637  148021 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 01:06:09.490641  148021 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 01:06:09.490645  148021 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 01:06:09.490653  148021 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 01:06:09.490657  148021 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 01:06:09.490661  148021 command_runner.go:130] > # 	"containers_oom_total",
	I1004 01:06:09.490665  148021 command_runner.go:130] > # 	"containers_oom",
	I1004 01:06:09.490672  148021 command_runner.go:130] > # 	"processes_defunct",
	I1004 01:06:09.490676  148021 command_runner.go:130] > # 	"operations_total",
	I1004 01:06:09.490681  148021 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 01:06:09.490685  148021 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 01:06:09.490692  148021 command_runner.go:130] > # 	"operations_errors_total",
	I1004 01:06:09.490697  148021 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 01:06:09.490704  148021 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 01:06:09.490709  148021 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 01:06:09.490716  148021 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 01:06:09.490720  148021 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 01:06:09.490727  148021 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 01:06:09.490730  148021 command_runner.go:130] > # ]
	I1004 01:06:09.490739  148021 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 01:06:09.490743  148021 command_runner.go:130] > # metrics_port = 9090
	I1004 01:06:09.490748  148021 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 01:06:09.490753  148021 command_runner.go:130] > # metrics_socket = ""
	I1004 01:06:09.490758  148021 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 01:06:09.490766  148021 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 01:06:09.490772  148021 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 01:06:09.490777  148021 command_runner.go:130] > # certificate on any modification event.
	I1004 01:06:09.490782  148021 command_runner.go:130] > # metrics_cert = ""
	I1004 01:06:09.490787  148021 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 01:06:09.490794  148021 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 01:06:09.490798  148021 command_runner.go:130] > # metrics_key = ""
	I1004 01:06:09.490804  148021 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 01:06:09.490808  148021 command_runner.go:130] > [crio.tracing]
	I1004 01:06:09.490816  148021 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 01:06:09.490822  148021 command_runner.go:130] > # enable_tracing = false
	I1004 01:06:09.490829  148021 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 01:06:09.490838  148021 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 01:06:09.490843  148021 command_runner.go:130] > # Number of samples to collect per million spans.
	I1004 01:06:09.490847  148021 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 01:06:09.490856  148021 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 01:06:09.490859  148021 command_runner.go:130] > [crio.stats]
	I1004 01:06:09.490867  148021 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 01:06:09.490872  148021 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 01:06:09.490879  148021 command_runner.go:130] > # stats_collection_period = 0
	I1004 01:06:09.491337  148021 command_runner.go:130] ! time="2023-10-04 01:06:09.459623930Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1004 01:06:09.491354  148021 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
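The config dump above shows the handful of values minikube overrides from CRI-O's defaults (drop_infra_ctr, pinns_path, pause_image, enable_metrics). A minimal sketch for spot-checking them from inside the guest, assuming the rendered file lives at /etc/crio/crio.conf, plus a hypothetical runtime-handler drop-in that follows the [crio.runtime.runtimes.<handler>] table format documented in the comments above (the crio.conf.d directory and the crun path are assumptions, not something this run configures):

    # run inside the guest, e.g. via `minikube ssh -p multinode-038823`
    sudo grep -E '^(drop_infra_ctr|pinns_path|pause_image|enable_metrics)' /etc/crio/crio.conf

    # hypothetical extra runtime handler, matching the documented table format
    sudo tee /etc/crio/crio.conf.d/10-crun.conf >/dev/null <<-'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF
    sudo systemctl restart crio   # restart so CRI-O re-reads its configuration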
	I1004 01:06:09.491414  148021 cni.go:84] Creating CNI manager for ""
	I1004 01:06:09.491425  148021 cni.go:136] 1 nodes found, recommending kindnet
	I1004 01:06:09.491442  148021 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:06:09.491462  148021 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-038823 NodeName:multinode-038823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:06:09.491600  148021 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-038823"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
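One way to sanity-check the rendered kubeadm config above before the real init is kubeadm's --dry-run mode; a sketch, run from inside the guest after the file has been copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step below (binary path as installed by minikube):

    # --dry-run runs preflight checks and renders manifests without modifying the node
    sudo /var/lib/minikube/binaries/v1.28.2/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run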
	
	I1004 01:06:09.491670  148021 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-038823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 01:06:09.491718  148021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:06:09.501045  148021 command_runner.go:130] > kubeadm
	I1004 01:06:09.501068  148021 command_runner.go:130] > kubectl
	I1004 01:06:09.501074  148021 command_runner.go:130] > kubelet
	I1004 01:06:09.501122  148021 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:06:09.501176  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:06:09.510049  148021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1004 01:06:09.525593  148021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:06:09.541153  148021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
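With the kubelet drop-in, unit file and kubeadm.yaml.new copied in, the standard systemd commands for inspecting and picking up the new ExecStart would look like the sketch below; kubeadm itself starts the kubelet during init further down, and warns when the unit is not enabled:

    sudo systemctl cat kubelet       # shows kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload     # pick up the new ExecStart
    sudo systemctl enable kubelet    # avoids the [WARNING Service-Kubelet] seen later in this log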
	I1004 01:06:09.558416  148021 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1004 01:06:09.562330  148021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
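The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current control-plane IP; a quick check that the pin took effect (sketch, run inside the guest):

    getent hosts control-plane.minikube.internal   # expect: 192.168.39.212 control-plane.minikube.internal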
	I1004 01:06:09.574346  148021 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823 for IP: 192.168.39.212
	I1004 01:06:09.574384  148021 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:09.574546  148021 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:06:09.574587  148021 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:06:09.574639  148021 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key
	I1004 01:06:09.574655  148021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt with IP's: []
	I1004 01:06:10.139030  148021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt ...
	I1004 01:06:10.139079  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt: {Name:mkebcfc01d4f443f37c279fd99ab00a14f3416d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.139276  148021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key ...
	I1004 01:06:10.139291  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key: {Name:mkff30c64df07b632f9914ae6769768c4fa64d12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.139391  148021 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key.543da273
	I1004 01:06:10.139410  148021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt.543da273 with IP's: [192.168.39.212 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 01:06:10.252333  148021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt.543da273 ...
	I1004 01:06:10.252365  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt.543da273: {Name:mkf74c19676125d1bc96372dcd8ddeedf6ccfea2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.252536  148021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key.543da273 ...
	I1004 01:06:10.252551  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key.543da273: {Name:mk9cf6acf66992ced6b8cb6641e317895a359e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.252645  148021 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt.543da273 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt
	I1004 01:06:10.252729  148021 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key.543da273 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key
	I1004 01:06:10.252807  148021 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key
	I1004 01:06:10.252828  148021 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt with IP's: []
	I1004 01:06:10.632428  148021 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt ...
	I1004 01:06:10.632467  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt: {Name:mk6c81150c29427709cf12fd9266082bd0da3b14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.632658  148021 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key ...
	I1004 01:06:10.632674  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key: {Name:mkb658a245773c3bc871ab15917150d3c08f1b01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:10.632782  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 01:06:10.632811  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 01:06:10.632839  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 01:06:10.632856  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 01:06:10.632871  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 01:06:10.632888  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 01:06:10.632907  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 01:06:10.632926  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
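The apiserver certificate generated above is signed for the IPs [192.168.39.212 10.96.0.1 127.0.0.1 10.0.0.1]; a sketch for inspecting the SANs of the resulting file on the host, using the profile path from the log:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt \
      | grep -A1 'Subject Alternative Name'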
	I1004 01:06:10.632993  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:06:10.633046  148021 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:06:10.633062  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:06:10.633100  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:06:10.633131  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:06:10.633164  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:06:10.633222  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:06:10.633256  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 01:06:10.633276  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 01:06:10.633295  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:06:10.633868  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:06:10.660592  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:06:10.683839  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:06:10.707910  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 01:06:10.731189  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:06:10.753023  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:06:10.776083  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:06:10.798728  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:06:10.821444  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:06:10.844411  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:06:10.867657  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:06:10.890677  148021 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:06:10.907821  148021 ssh_runner.go:195] Run: openssl version
	I1004 01:06:10.913278  148021 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1004 01:06:10.913685  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:06:10.924575  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:06:10.929250  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:06:10.929288  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:06:10.929340  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:06:10.934889  148021 command_runner.go:130] > 51391683
	I1004 01:06:10.935058  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:06:10.946294  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:06:10.957375  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:06:10.962209  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:06:10.962344  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:06:10.962408  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:06:10.967881  148021 command_runner.go:130] > 3ec20f2e
	I1004 01:06:10.968007  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:06:10.978999  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:06:10.989965  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:06:10.994677  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:06:10.994776  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:06:10.994838  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:06:11.000508  148021 command_runner.go:130] > b5213941
	I1004 01:06:11.000575  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
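The <hash>.0 symlinks created above follow OpenSSL's hashed-CA-directory convention: `openssl x509 -hash` prints the certificate's subject hash, and a link named <hash>.0 in /etc/ssl/certs lets tools resolve the CA by that hash. A sketch that reproduces the check for the minikube CA inside the guest, with the hash value taken from the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, as logged above
    ls -l /etc/ssl/certs/b5213941.0                                           # should be a symlink to .../minikubeCA.pem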
	I1004 01:06:11.011366  148021 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:06:11.015646  148021 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:06:11.015697  148021 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:06:11.015745  148021 kubeadm.go:404] StartCluster: {Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:06:11.015829  148021 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:06:11.015874  148021 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:06:11.053861  148021 cri.go:89] found id: ""
	I1004 01:06:11.053935  148021 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:06:11.063664  148021 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1004 01:06:11.063690  148021 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1004 01:06:11.063696  148021 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1004 01:06:11.063905  148021 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:06:11.073224  148021 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:06:11.082672  148021 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1004 01:06:11.082712  148021 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1004 01:06:11.082720  148021 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1004 01:06:11.082727  148021 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:06:11.082767  148021 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:06:11.082841  148021 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 01:06:11.424595  148021 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:06:11.424662  148021 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:06:24.055883  148021 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 01:06:24.055922  148021 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1004 01:06:24.055981  148021 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:06:24.055990  148021 command_runner.go:130] > [preflight] Running pre-flight checks
	I1004 01:06:24.056083  148021 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:06:24.056096  148021 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:06:24.056213  148021 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:06:24.056225  148021 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:06:24.056347  148021 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:06:24.056359  148021 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:06:24.056429  148021 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:06:24.058099  148021 out.go:204]   - Generating certificates and keys ...
	I1004 01:06:24.056492  148021 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:06:24.058208  148021 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:06:24.058224  148021 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1004 01:06:24.058323  148021 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:06:24.058356  148021 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1004 01:06:24.058432  148021 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 01:06:24.058442  148021 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 01:06:24.058511  148021 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 01:06:24.058521  148021 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1004 01:06:24.058704  148021 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 01:06:24.058729  148021 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1004 01:06:24.058796  148021 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 01:06:24.058809  148021 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1004 01:06:24.058886  148021 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 01:06:24.058895  148021 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1004 01:06:24.058990  148021 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-038823] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1004 01:06:24.058999  148021 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-038823] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1004 01:06:24.059039  148021 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 01:06:24.059047  148021 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1004 01:06:24.059241  148021 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-038823] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1004 01:06:24.059265  148021 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-038823] and IPs [192.168.39.212 127.0.0.1 ::1]
	I1004 01:06:24.059365  148021 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 01:06:24.059377  148021 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 01:06:24.059461  148021 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 01:06:24.059472  148021 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 01:06:24.059532  148021 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 01:06:24.059542  148021 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1004 01:06:24.059628  148021 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:06:24.059640  148021 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:06:24.059707  148021 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:06:24.059717  148021 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:06:24.059787  148021 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:06:24.059797  148021 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:06:24.059865  148021 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:06:24.059876  148021 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:06:24.059920  148021 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:06:24.059927  148021 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:06:24.059990  148021 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:06:24.059999  148021 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:06:24.060050  148021 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:06:24.061634  148021 out.go:204]   - Booting up control plane ...
	I1004 01:06:24.060147  148021 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:06:24.061741  148021 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:06:24.061750  148021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:06:24.061810  148021 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:06:24.061817  148021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:06:24.061905  148021 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:06:24.061915  148021 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:06:24.062066  148021 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:06:24.062079  148021 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:06:24.062198  148021 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:06:24.062268  148021 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:06:24.062344  148021 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1004 01:06:24.062355  148021 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 01:06:24.062564  148021 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:06:24.062579  148021 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:06:24.062674  148021 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.005518 seconds
	I1004 01:06:24.062687  148021 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.005518 seconds
	I1004 01:06:24.062839  148021 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:06:24.062851  148021 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:06:24.063030  148021 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:06:24.063048  148021 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:06:24.063114  148021 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:06:24.063123  148021 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:06:24.063345  148021 command_runner.go:130] > [mark-control-plane] Marking the node multinode-038823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:06:24.063359  148021 kubeadm.go:322] [mark-control-plane] Marking the node multinode-038823 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:06:24.063428  148021 command_runner.go:130] > [bootstrap-token] Using token: 6orhmh.ijrhtxolcg9cos8u
	I1004 01:06:24.063452  148021 kubeadm.go:322] [bootstrap-token] Using token: 6orhmh.ijrhtxolcg9cos8u
	I1004 01:06:24.065752  148021 out.go:204]   - Configuring RBAC rules ...
	I1004 01:06:24.065902  148021 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:06:24.065919  148021 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:06:24.066005  148021 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:06:24.066025  148021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:06:24.066184  148021 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:06:24.066203  148021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:06:24.066354  148021 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:06:24.066373  148021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:06:24.066529  148021 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:06:24.066540  148021 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:06:24.066687  148021 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:06:24.066698  148021 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:06:24.066821  148021 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:06:24.066836  148021 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:06:24.066899  148021 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1004 01:06:24.066909  148021 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:06:24.066970  148021 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1004 01:06:24.066980  148021 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:06:24.066990  148021 kubeadm.go:322] 
	I1004 01:06:24.067086  148021 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1004 01:06:24.067096  148021 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:06:24.067102  148021 kubeadm.go:322] 
	I1004 01:06:24.067250  148021 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1004 01:06:24.067269  148021 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:06:24.067273  148021 kubeadm.go:322] 
	I1004 01:06:24.067294  148021 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1004 01:06:24.067300  148021 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:06:24.067354  148021 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:06:24.067361  148021 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:06:24.067424  148021 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:06:24.067442  148021 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:06:24.067459  148021 kubeadm.go:322] 
	I1004 01:06:24.067623  148021 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1004 01:06:24.067639  148021 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 01:06:24.067645  148021 kubeadm.go:322] 
	I1004 01:06:24.067710  148021 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:06:24.067717  148021 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:06:24.067721  148021 kubeadm.go:322] 
	I1004 01:06:24.067790  148021 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1004 01:06:24.067807  148021 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:06:24.067894  148021 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:06:24.067906  148021 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:06:24.068061  148021 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:06:24.068073  148021 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:06:24.068079  148021 kubeadm.go:322] 
	I1004 01:06:24.068182  148021 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:06:24.068194  148021 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:06:24.068298  148021 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1004 01:06:24.068310  148021 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:06:24.068315  148021 kubeadm.go:322] 
	I1004 01:06:24.068435  148021 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 6orhmh.ijrhtxolcg9cos8u \
	I1004 01:06:24.068437  148021 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6orhmh.ijrhtxolcg9cos8u \
	I1004 01:06:24.068606  148021 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:06:24.068620  148021 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:06:24.068659  148021 command_runner.go:130] > 	--control-plane 
	I1004 01:06:24.068669  148021 kubeadm.go:322] 	--control-plane 
	I1004 01:06:24.068678  148021 kubeadm.go:322] 
	I1004 01:06:24.068776  148021 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:06:24.068785  148021 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:06:24.068793  148021 kubeadm.go:322] 
	I1004 01:06:24.068897  148021 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 6orhmh.ijrhtxolcg9cos8u \
	I1004 01:06:24.068925  148021 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6orhmh.ijrhtxolcg9cos8u \
	I1004 01:06:24.069099  148021 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:06:24.069117  148021 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
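The sha256:31cbe84f... value in the two join commands above is kubeadm's discovery token CA certificate hash: a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. As a minimal sketch (the /etc/kubernetes/pki/ca.crt path is the conventional kubeadm location and is an assumption, not something shown in this log), the following Go program recomputes that value so it can be compared with the hash printed here:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed location of the cluster CA certificate on the control-plane node.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}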
	I1004 01:06:24.069131  148021 cni.go:84] Creating CNI manager for ""
	I1004 01:06:24.069152  148021 cni.go:136] 1 nodes found, recommending kindnet
	I1004 01:06:24.070948  148021 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 01:06:24.072465  148021 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 01:06:24.085335  148021 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1004 01:06:24.085357  148021 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1004 01:06:24.085368  148021 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1004 01:06:24.085374  148021 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:06:24.085380  148021 command_runner.go:130] > Access: 2023-10-04 01:05:51.007018562 +0000
	I1004 01:06:24.085385  148021 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1004 01:06:24.085390  148021 command_runner.go:130] > Change: 2023-10-04 01:05:49.121018562 +0000
	I1004 01:06:24.085394  148021 command_runner.go:130] >  Birth: -
	I1004 01:06:24.086388  148021 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 01:06:24.086404  148021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 01:06:24.124720  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 01:06:25.118775  148021 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1004 01:06:25.131014  148021 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1004 01:06:25.149718  148021 command_runner.go:130] > serviceaccount/kindnet created
	I1004 01:06:25.164240  148021 command_runner.go:130] > daemonset.apps/kindnet created
	I1004 01:06:25.166892  148021 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.042137808s)
	I1004 01:06:25.166930  148021 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:06:25.167036  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:25.167063  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=multinode-038823 minikube.k8s.io/updated_at=2023_10_04T01_06_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:25.189989  148021 command_runner.go:130] > -16
	I1004 01:06:25.190090  148021 ops.go:34] apiserver oom_adj: -16
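The "-16" read back here comes from the shell command "cat /proc/$(pgrep kube-apiserver)/oom_adj" a few lines up. A small Go sketch of the same check, scanning /proc/*/comm in place of pgrep (this is an illustrative stand-in, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		// Non-PID directories fail the comm read and are skipped.
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil || strings.TrimSpace(string(comm)) != "kube-apiserver" {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			panic(err)
		}
		fmt.Printf("pid %s oom_adj: %s", e.Name(), adj)
		return
	}
	fmt.Println("kube-apiserver process not found")
}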
	I1004 01:06:25.387549  148021 command_runner.go:130] > node/multinode-038823 labeled
	I1004 01:06:25.393811  148021 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1004 01:06:25.393957  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:25.490669  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:25.490798  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:25.580021  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:26.080914  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:26.165832  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:26.580949  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:26.673224  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:27.080867  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:27.167090  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:27.580628  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:27.664784  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:28.080932  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:28.165335  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:28.580926  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:28.668178  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:29.080751  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:29.178880  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:29.580945  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:29.679536  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:30.081170  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:30.166144  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:30.580360  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:30.670305  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:31.080298  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:31.168210  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:31.580821  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:31.670865  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:32.080437  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:32.177708  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:32.580944  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:32.689402  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:33.080470  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:33.166744  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:33.581138  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:33.675937  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:34.081027  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:34.180911  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:34.580394  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:34.700961  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:35.080435  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:35.179972  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:35.580551  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:35.702477  148021 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1004 01:06:36.080456  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:06:36.185969  148021 command_runner.go:130] > NAME      SECRETS   AGE
	I1004 01:06:36.185998  148021 command_runner.go:130] > default   0         0s
	I1004 01:06:36.186303  148021 kubeadm.go:1081] duration metric: took 11.019325443s to wait for elevateKubeSystemPrivileges.
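The run of 'serviceaccounts "default" not found' errors above is the elevateKubeSystemPrivileges wait: the same "kubectl get sa default" call is retried until the controller manager has created the default ServiceAccount. A rough client-go equivalent of that retry loop, assuming a *kubernetes.Clientset built from the cluster kubeconfig (for example via clientcmd.BuildConfigFromFlags and kubernetes.NewForConfig); the package and function names are illustrative, not minikube code:

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount blocks until the "default" ServiceAccount exists in the
// "default" namespace, mirroring the repeated "kubectl get sa default" calls above.
func waitForDefaultServiceAccount(ctx context.Context, cs *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err != nil {
				return false, nil // NotFound until the controller creates it; keep polling
			}
			return true, nil
		})
}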
	I1004 01:06:36.186340  148021 kubeadm.go:406] StartCluster complete in 25.170597644s
	I1004 01:06:36.186364  148021 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:36.186455  148021 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:06:36.187176  148021 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:06:36.187403  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:06:36.187523  148021 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:06:36.187606  148021 addons.go:69] Setting storage-provisioner=true in profile "multinode-038823"
	I1004 01:06:36.187629  148021 addons.go:231] Setting addon storage-provisioner=true in "multinode-038823"
	I1004 01:06:36.187656  148021 addons.go:69] Setting default-storageclass=true in profile "multinode-038823"
	I1004 01:06:36.187687  148021 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-038823"
	I1004 01:06:36.187691  148021 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:06:36.187741  148021 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:06:36.187664  148021 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:06:36.188041  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:36.188047  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:36.188063  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:36.188059  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:36.188036  148021 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:06:36.189127  148021 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:06:36.189147  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.189159  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.189167  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:36.189358  148021 cert_rotation.go:137] Starting client certificate rotation controller
	I1004 01:06:36.202766  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1004 01:06:36.203213  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:36.203764  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:36.203791  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:36.204201  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:36.204384  148021 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:06:36.205486  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1004 01:06:36.205929  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:36.206462  148021 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:06:36.206710  148021 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:06:36.206955  148021 addons.go:231] Setting addon default-storageclass=true in "multinode-038823"
	I1004 01:06:36.206985  148021 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:06:36.207251  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:36.207269  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:36.207570  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:36.207585  148021 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I1004 01:06:36.207598  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:36.207598  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:36.207608  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:36 GMT
	I1004 01:06:36.207618  148021 round_trippers.go:580]     Audit-Id: 1b2082a4-7cc2-4378-9b78-728742bf1633
	I1004 01:06:36.207632  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:36.207648  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:36.207654  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:36.207660  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:36.207666  148021 round_trippers.go:580]     Content-Length: 291
	I1004 01:06:36.207699  148021 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"263","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1004 01:06:36.208075  148021 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"263","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1004 01:06:36.208140  148021 round_trippers.go:463] PUT https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:06:36.208153  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.208164  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:36.208173  148021 round_trippers.go:473]     Content-Type: application/json
	I1004 01:06:36.208179  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.208078  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:36.208625  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:36.208657  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:36.222273  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I1004 01:06:36.222830  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:36.222955  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I1004 01:06:36.223398  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:36.223416  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:36.223466  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:36.223828  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:36.224038  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:36.224065  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:36.224458  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:36.224470  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:36.224503  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:36.224656  148021 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:06:36.226704  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:36.228537  148021 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:06:36.230012  148021 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:06:36.230032  148021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:06:36.230055  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:36.230281  148021 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1004 01:06:36.230297  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:36.230306  148021 round_trippers.go:580]     Audit-Id: 81068ac6-bed0-4cc7-bab0-bd2d5ced9416
	I1004 01:06:36.230320  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:36.230329  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:36.230337  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:36.230346  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:36.230355  148021 round_trippers.go:580]     Content-Length: 291
	I1004 01:06:36.230363  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:36 GMT
	I1004 01:06:36.230394  148021 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"340","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1004 01:06:36.230564  148021 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:06:36.230572  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.230583  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:36.230592  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.233930  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:36.234402  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:36.234450  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:36.234714  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:36.234927  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:36.235035  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:36.235043  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:36.235049  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:36.235055  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:36.235063  148021 round_trippers.go:580]     Content-Length: 291
	I1004 01:06:36.235074  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:36 GMT
	I1004 01:06:36.235081  148021 round_trippers.go:580]     Audit-Id: 46b5b077-3f8a-471f-913d-d7baa931ceea
	I1004 01:06:36.235090  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:36.235098  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:36.235159  148021 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"340","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1004 01:06:36.235158  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:36.235280  148021 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-038823" context rescaled to 1 replicas
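The GET and PUT against .../deployments/coredns/scale above use the autoscaling/v1 Scale subresource to drop CoreDNS from two replicas to one. A minimal client-go sketch of the same rescale, under the same clientset assumption as the sketch above (package and function names are illustrative):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNSToOne reads the current Scale of the coredns Deployment and writes it
// back with a single replica, the same GET/PUT pair shown in the log above.
func scaleCoreDNSToOne(ctx context.Context, cs *kubernetes.Clientset) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}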
	I1004 01:06:36.235315  148021 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:06:36.237022  148021 out.go:177] * Verifying Kubernetes components...
	I1004 01:06:36.235481  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:36.238713  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:06:36.242109  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I1004 01:06:36.242552  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:36.243116  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:36.243150  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:36.243542  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:36.243788  148021 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:06:36.245440  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:06:36.245733  148021 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:06:36.245751  148021 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:06:36.245765  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:06:36.248920  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:36.249452  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:06:36.249483  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:06:36.249681  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:06:36.249915  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:06:36.250086  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:06:36.250228  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:06:36.375000  148021 command_runner.go:130] > apiVersion: v1
	I1004 01:06:36.375022  148021 command_runner.go:130] > data:
	I1004 01:06:36.375029  148021 command_runner.go:130] >   Corefile: |
	I1004 01:06:36.375035  148021 command_runner.go:130] >     .:53 {
	I1004 01:06:36.375041  148021 command_runner.go:130] >         errors
	I1004 01:06:36.375048  148021 command_runner.go:130] >         health {
	I1004 01:06:36.375055  148021 command_runner.go:130] >            lameduck 5s
	I1004 01:06:36.375068  148021 command_runner.go:130] >         }
	I1004 01:06:36.375073  148021 command_runner.go:130] >         ready
	I1004 01:06:36.375082  148021 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1004 01:06:36.375088  148021 command_runner.go:130] >            pods insecure
	I1004 01:06:36.375096  148021 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1004 01:06:36.375104  148021 command_runner.go:130] >            ttl 30
	I1004 01:06:36.375114  148021 command_runner.go:130] >         }
	I1004 01:06:36.375122  148021 command_runner.go:130] >         prometheus :9153
	I1004 01:06:36.375131  148021 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1004 01:06:36.375155  148021 command_runner.go:130] >            max_concurrent 1000
	I1004 01:06:36.375166  148021 command_runner.go:130] >         }
	I1004 01:06:36.375173  148021 command_runner.go:130] >         cache 30
	I1004 01:06:36.375180  148021 command_runner.go:130] >         loop
	I1004 01:06:36.375189  148021 command_runner.go:130] >         reload
	I1004 01:06:36.375196  148021 command_runner.go:130] >         loadbalance
	I1004 01:06:36.375203  148021 command_runner.go:130] >     }
	I1004 01:06:36.375211  148021 command_runner.go:130] > kind: ConfigMap
	I1004 01:06:36.375218  148021 command_runner.go:130] > metadata:
	I1004 01:06:36.375229  148021 command_runner.go:130] >   creationTimestamp: "2023-10-04T01:06:23Z"
	I1004 01:06:36.375238  148021 command_runner.go:130] >   name: coredns
	I1004 01:06:36.375247  148021 command_runner.go:130] >   namespace: kube-system
	I1004 01:06:36.375255  148021 command_runner.go:130] >   resourceVersion: "259"
	I1004 01:06:36.375268  148021 command_runner.go:130] >   uid: 868b8069-9cac-4d4d-8503-3a3cef90175c
	I1004 01:06:36.377214  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:06:36.377479  148021 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:06:36.377699  148021 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:06:36.378046  148021 node_ready.go:35] waiting up to 6m0s for node "multinode-038823" to be "Ready" ...
	I1004 01:06:36.378147  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:36.378157  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.378169  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.378180  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:36.380438  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:36.380454  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:36.380462  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:36.380470  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:36.380479  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:36.380490  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:36.380504  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:36 GMT
	I1004 01:06:36.380513  148021 round_trippers.go:580]     Audit-Id: 43024ee6-8bef-4161-bcb1-57c7619db2be
	I1004 01:06:36.380692  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:36.381210  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:36.381225  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.381235  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.381244  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:36.384064  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:36.384079  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:36.384088  148021 round_trippers.go:580]     Audit-Id: 3970398f-4371-4c2e-a371-97dd135b681f
	I1004 01:06:36.384112  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:36.384127  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:36.384136  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:36.384143  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:36.384149  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:36 GMT
	I1004 01:06:36.384269  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:36.412775  148021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:06:36.440230  148021 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:06:36.884755  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:36.884786  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:36.884803  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:36.884812  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:37.091243  148021 round_trippers.go:574] Response Status: 200 OK in 206 milliseconds
	I1004 01:06:37.091280  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:37.091290  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:37.091298  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:37.091306  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:37.091315  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:37 GMT
	I1004 01:06:37.091322  148021 round_trippers.go:580]     Audit-Id: c1e961d0-2a3b-4525-b87d-1e8ae4b4e329
	I1004 01:06:37.091334  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:37.091488  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:37.209329  148021 command_runner.go:130] > configmap/coredns replaced
	I1004 01:06:37.212229  148021 start.go:923] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
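The bash pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1 here). A rough client-go equivalent of that edit; the plain string insertion and the eight-space indentation anchor are simplifying assumptions, and the real command also adds a log plugin that this sketch omits:

package sketch

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// injectMinikubeHostRecord adds a hosts block for host.minikube.internal to the coredns
// Corefile, roughly the edit the sed pipeline above performs; gatewayIP is a parameter.
func injectMinikubeHostRecord(ctx context.Context, cs *kubernetes.Clientset, gatewayIP string) error {
	cms := cs.CoreV1().ConfigMaps("kube-system")
	cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	anchor := "        forward . /etc/resolv.conf"
	hosts := "        hosts {\n           " + gatewayIP + " host.minikube.internal\n           fallthrough\n        }\n"
	// Insert the hosts block immediately before the forward plugin stanza.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], anchor, hosts+anchor, 1)
	_, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
	return err
}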
	I1004 01:06:37.365656  148021 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1004 01:06:37.384928  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:37.384953  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:37.384964  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:37.384971  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:37.385546  148021 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1004 01:06:37.389566  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:37.389592  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:37.389602  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:37.389611  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:37 GMT
	I1004 01:06:37.389619  148021 round_trippers.go:580]     Audit-Id: 71299dc3-08c1-4535-9ffe-c119bfb52f9f
	I1004 01:06:37.389628  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:37.389641  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:37.389650  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:37.390092  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:37.411379  148021 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1004 01:06:37.447707  148021 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1004 01:06:37.463944  148021 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1004 01:06:37.479760  148021 command_runner.go:130] > pod/storage-provisioner created
	I1004 01:06:37.482176  148021 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1004 01:06:37.482178  148021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.06936824s)
	I1004 01:06:37.482217  148021 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.041961078s)
	I1004 01:06:37.482248  148021 main.go:141] libmachine: Making call to close driver server
	I1004 01:06:37.482261  148021 main.go:141] libmachine: (multinode-038823) Calling .Close
	I1004 01:06:37.482322  148021 main.go:141] libmachine: Making call to close driver server
	I1004 01:06:37.482353  148021 main.go:141] libmachine: (multinode-038823) Calling .Close
	I1004 01:06:37.482655  148021 main.go:141] libmachine: (multinode-038823) DBG | Closing plugin on server side
	I1004 01:06:37.482675  148021 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:06:37.482691  148021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:06:37.482703  148021 main.go:141] libmachine: Making call to close driver server
	I1004 01:06:37.482705  148021 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:06:37.482712  148021 main.go:141] libmachine: (multinode-038823) Calling .Close
	I1004 01:06:37.482716  148021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:06:37.482743  148021 main.go:141] libmachine: Making call to close driver server
	I1004 01:06:37.482912  148021 main.go:141] libmachine: (multinode-038823) Calling .Close
	I1004 01:06:37.482922  148021 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:06:37.482935  148021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:06:37.482675  148021 main.go:141] libmachine: (multinode-038823) DBG | Closing plugin on server side
	I1004 01:06:37.483305  148021 main.go:141] libmachine: (multinode-038823) DBG | Closing plugin on server side
	I1004 01:06:37.483359  148021 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:06:37.483385  148021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:06:37.483500  148021 round_trippers.go:463] GET https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses
	I1004 01:06:37.483508  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:37.483519  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:37.483529  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:37.488268  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:37.488289  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:37.488296  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:37.488301  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:37.488307  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:37.488312  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:37.488317  148021 round_trippers.go:580]     Content-Length: 1273
	I1004 01:06:37.488322  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:37 GMT
	I1004 01:06:37.488329  148021 round_trippers.go:580]     Audit-Id: 94fecfcd-a1d0-4526-aec5-d7c47fcbfe39
	I1004 01:06:37.488395  148021 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"standard","uid":"0d5f739e-4516-4cb6-8562-43239159cca4","resourceVersion":"388","creationTimestamp":"2023-10-04T01:06:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-04T01:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1004 01:06:37.488966  148021 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0d5f739e-4516-4cb6-8562-43239159cca4","resourceVersion":"388","creationTimestamp":"2023-10-04T01:06:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-04T01:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1004 01:06:37.489042  148021 round_trippers.go:463] PUT https://192.168.39.212:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1004 01:06:37.489054  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:37.489066  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:37.489079  148021 round_trippers.go:473]     Content-Type: application/json
	I1004 01:06:37.489092  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:37.504229  148021 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1004 01:06:37.504255  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:37.504267  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:37.504276  148021 round_trippers.go:580]     Content-Length: 1220
	I1004 01:06:37.504285  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:37 GMT
	I1004 01:06:37.504291  148021 round_trippers.go:580]     Audit-Id: e0dd7f7c-d846-4d6f-8e78-b70c98dc7e75
	I1004 01:06:37.504301  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:37.504306  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:37.504311  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:37.504345  148021 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"0d5f739e-4516-4cb6-8562-43239159cca4","resourceVersion":"388","creationTimestamp":"2023-10-04T01:06:37Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-04T01:06:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1004 01:06:37.504484  148021 main.go:141] libmachine: Making call to close driver server
	I1004 01:06:37.504497  148021 main.go:141] libmachine: (multinode-038823) Calling .Close
	I1004 01:06:37.504828  148021 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:06:37.504848  148021 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:06:37.504889  148021 main.go:141] libmachine: (multinode-038823) DBG | Closing plugin on server side
	I1004 01:06:37.507721  148021 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1004 01:06:37.509322  148021 addons.go:502] enable addons completed in 1.321799517s: enabled=[storage-provisioner default-storageclass]
	I1004 01:06:37.884819  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:37.884848  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:37.884860  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:37.884869  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:37.888115  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:37.888144  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:37.888154  148021 round_trippers.go:580]     Audit-Id: f067df2b-0376-47df-9699-4d5442785999
	I1004 01:06:37.888160  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:37.888165  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:37.888170  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:37.888191  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:37.888196  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:37 GMT
	I1004 01:06:37.890534  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:38.384889  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:38.384914  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:38.384923  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:38.384928  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:38.387729  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:38.387750  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:38.387757  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:38.387765  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:38.387774  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:38.387783  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:38.387792  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:38 GMT
	I1004 01:06:38.387801  148021 round_trippers.go:580]     Audit-Id: 276bd950-0839-49dd-a2a0-f6f8724996fa
	I1004 01:06:38.387990  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:38.388296  148021 node_ready.go:58] node "multinode-038823" has status "Ready":"False"
	I1004 01:06:38.885721  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:38.885744  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:38.885756  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:38.885762  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:38.888338  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:38.888366  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:38.888377  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:38.888386  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:38.888400  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:38 GMT
	I1004 01:06:38.888423  148021 round_trippers.go:580]     Audit-Id: d8ed06ac-65a8-4beb-bfce-f788e86c0490
	I1004 01:06:38.888428  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:38.888437  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:38.888871  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:39.385665  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:39.385704  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:39.385715  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:39.385723  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:39.389356  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:39.389374  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:39.389382  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:39.389387  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:39.389392  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:39.389397  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:39 GMT
	I1004 01:06:39.389405  148021 round_trippers.go:580]     Audit-Id: fe91561c-fe7d-4259-a07a-ac996d6e0ce0
	I1004 01:06:39.389413  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:39.389767  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:39.885450  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:39.885472  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:39.885480  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:39.885487  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:39.888410  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:39.888426  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:39.888432  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:39 GMT
	I1004 01:06:39.888440  148021 round_trippers.go:580]     Audit-Id: 827a0c0e-1cab-4a53-82d6-8ea7aba5d850
	I1004 01:06:39.888445  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:39.888450  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:39.888457  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:39.888465  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:39.889092  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:40.384805  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:40.384829  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:40.384837  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:40.384843  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:40.392102  148021 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 01:06:40.392124  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:40.392131  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:40.392137  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:40.392142  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:40.392147  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:40.392152  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:40 GMT
	I1004 01:06:40.392157  148021 round_trippers.go:580]     Audit-Id: 82c317a9-5f7a-483b-ae18-e3c3f372adbc
	I1004 01:06:40.392395  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:40.392704  148021 node_ready.go:58] node "multinode-038823" has status "Ready":"False"
	I1004 01:06:40.885028  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:40.885050  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:40.885059  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:40.885065  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:40.887824  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:40.887849  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:40.887860  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:40.887869  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:40 GMT
	I1004 01:06:40.887877  148021 round_trippers.go:580]     Audit-Id: 7bfd2589-9ae8-42d8-af8e-18ed3129184f
	I1004 01:06:40.887886  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:40.887894  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:40.887901  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:40.888063  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:41.385069  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:41.385097  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:41.385108  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:41.385117  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:41.388146  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:41.388175  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:41.388183  148021 round_trippers.go:580]     Audit-Id: c0ba7a5f-cefe-48ce-8168-fe3efd3bf3a6
	I1004 01:06:41.388189  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:41.388194  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:41.388199  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:41.388204  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:41.388209  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:41 GMT
	I1004 01:06:41.388730  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:41.885431  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:41.885458  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:41.885466  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:41.885472  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:41.888654  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:41.888684  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:41.888695  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:41.888703  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:41.888713  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:41.888721  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:41 GMT
	I1004 01:06:41.888730  148021 round_trippers.go:580]     Audit-Id: 4f30b43e-c0a4-4522-8b9d-f485fcf1f3f5
	I1004 01:06:41.888740  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:41.888926  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"350","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I1004 01:06:42.385643  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:42.385669  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.385678  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.385684  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.389533  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:42.389552  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.389559  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.389568  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.389577  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.389587  148021 round_trippers.go:580]     Audit-Id: 645553b0-9b58-4433-aa95-c4a358c8525c
	I1004 01:06:42.389595  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.389604  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.389773  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:42.390110  148021 node_ready.go:49] node "multinode-038823" has status "Ready":"True"
	I1004 01:06:42.390125  148021 node_ready.go:38] duration metric: took 6.012060554s waiting for node "multinode-038823" to be "Ready" ...
	I1004 01:06:42.390145  148021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:06:42.390225  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:06:42.390238  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.390245  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.390251  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.394055  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:42.394077  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.394086  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.394092  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.394098  148021 round_trippers.go:580]     Audit-Id: 372a78c0-8e2e-49b8-9c8f-9274abed5483
	I1004 01:06:42.394106  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.394111  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.394116  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.395033  148021 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53918 chars]
	I1004 01:06:42.397977  148021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:42.398049  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:42.398058  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.398066  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.398072  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.402435  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:42.402456  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.402466  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.402474  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.402483  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.402492  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.402502  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.402512  148021 round_trippers.go:580]     Audit-Id: 38f8d0d4-e757-4381-a428-fff550986ec2
	I1004 01:06:42.402692  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1004 01:06:42.403193  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:42.403210  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.403217  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.403223  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.406665  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:42.406696  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.406706  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.406715  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.406724  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.406731  148021 round_trippers.go:580]     Audit-Id: a4fef8a5-8a57-4a7a-8aff-31bab938097b
	I1004 01:06:42.406744  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.406753  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.406982  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:42.407404  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:42.407422  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.407429  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.407435  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.410914  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:42.410930  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.410936  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.410942  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.410949  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.410961  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.410974  148021 round_trippers.go:580]     Audit-Id: b8b572cc-5962-422a-8683-b6f124147b3f
	I1004 01:06:42.410982  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.411338  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1004 01:06:42.411747  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:42.411759  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.411766  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.411774  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.413961  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:42.413975  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.413982  148021 round_trippers.go:580]     Audit-Id: 4b45aa80-d7cc-466c-b3cc-b169512923f9
	I1004 01:06:42.413987  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.413993  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.414001  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.414011  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.414023  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.414722  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:42.915956  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:42.915981  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.915989  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.915995  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.918778  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:42.918804  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.918815  148021 round_trippers.go:580]     Audit-Id: 9002d871-1a4d-4cd2-bbbb-700bf84f8659
	I1004 01:06:42.918823  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.918833  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.918841  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.918849  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.918858  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.918989  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1004 01:06:42.919598  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:42.919620  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:42.919631  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:42.919641  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:42.924601  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:42.924618  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:42.924624  148021 round_trippers.go:580]     Audit-Id: 61b1d8e4-fbe6-44b1-a556-376fddf5aa95
	I1004 01:06:42.924629  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:42.924636  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:42.924644  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:42.924653  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:42.924660  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:42 GMT
	I1004 01:06:42.924895  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:43.415540  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:43.415567  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:43.415576  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:43.415582  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:43.418705  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:43.418728  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:43.418735  148021 round_trippers.go:580]     Audit-Id: d99b70c1-0191-4792-9045-c2be0dfbda89
	I1004 01:06:43.418742  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:43.418750  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:43.418776  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:43.418789  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:43.418794  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:43 GMT
	I1004 01:06:43.419011  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1004 01:06:43.419555  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:43.419572  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:43.419579  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:43.419585  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:43.421912  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:43.421931  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:43.421937  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:43.421944  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:43.421952  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:43.421960  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:43.421969  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:43 GMT
	I1004 01:06:43.421981  148021 round_trippers.go:580]     Audit-Id: 0094815c-7fdb-4d49-bac9-b90b7a49b33a
	I1004 01:06:43.422182  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:43.915812  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:43.915837  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:43.915846  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:43.915852  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:43.919975  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:06:43.920002  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:43.920013  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:43 GMT
	I1004 01:06:43.920022  148021 round_trippers.go:580]     Audit-Id: cc4693de-6e05-4563-b0f4-f3bfd5b91ef9
	I1004 01:06:43.920042  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:43.920050  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:43.920058  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:43.920066  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:43.921032  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"422","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I1004 01:06:43.921497  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:43.921511  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:43.921518  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:43.921537  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:43.923811  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:43.923831  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:43.923840  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:43.923849  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:43.923860  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:43.923866  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:43 GMT
	I1004 01:06:43.923872  148021 round_trippers.go:580]     Audit-Id: 395018a8-57a6-4f0c-8672-a8e64a9c4c73
	I1004 01:06:43.923881  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:43.924004  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.415688  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:06:44.415713  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.415724  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.415733  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.418678  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:44.418698  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.418705  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.418710  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.418716  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.418723  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.418731  148021 round_trippers.go:580]     Audit-Id: 6339ea6e-7e98-455f-baca-4bb525badd48
	I1004 01:06:44.418742  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.418926  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"438","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1004 01:06:44.419377  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.419392  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.419403  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.419410  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.421410  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:44.421423  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.421429  148021 round_trippers.go:580]     Audit-Id: ef67f279-de5b-49a4-9e4d-6d7b837a68f0
	I1004 01:06:44.421434  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.421439  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.421446  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.421454  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.421463  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.421624  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.422044  148021 pod_ready.go:92] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:44.422064  148021 pod_ready.go:81] duration metric: took 2.02406435s waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.422077  148021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.422145  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:06:44.422155  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.422166  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.422178  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.425529  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:44.425545  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.425554  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.425561  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.425573  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.425580  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.425588  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.425594  148021 round_trippers.go:580]     Audit-Id: 644aabb2-c98c-4cf3-9a67-50c80929d514
	I1004 01:06:44.425715  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"324","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1004 01:06:44.426083  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.426098  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.426108  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.426117  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.427984  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:44.427997  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.428006  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.428015  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.428023  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.428036  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.428048  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.428061  148021 round_trippers.go:580]     Audit-Id: b4bdd4f8-4a43-41b4-a074-1c9c67931932
	I1004 01:06:44.428421  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.428788  148021 pod_ready.go:92] pod "etcd-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:44.428807  148021 pod_ready.go:81] duration metric: took 6.718116ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.428822  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.428877  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:06:44.428891  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.428902  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.428911  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.430881  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:44.430897  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.430905  148021 round_trippers.go:580]     Audit-Id: 9718628e-2480-48fe-a910-99b8ce447b32
	I1004 01:06:44.430913  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.430920  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.430928  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.430936  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.430948  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.431271  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"323","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1004 01:06:44.431724  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.431740  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.431751  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.431764  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.433564  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:44.433578  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.433587  148021 round_trippers.go:580]     Audit-Id: b54288ba-7cd8-4c84-80c9-f42cf760b46a
	I1004 01:06:44.433594  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.433603  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.433613  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.433625  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.433635  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.433762  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.434053  148021 pod_ready.go:92] pod "kube-apiserver-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:44.434068  148021 pod_ready.go:81] duration metric: took 5.238918ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.434077  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.434128  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:06:44.434136  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.434143  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.434152  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.436197  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:44.436211  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.436220  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.436227  148021 round_trippers.go:580]     Audit-Id: 166c4c38-2d01-4d88-93f9-f5f2c9c060cd
	I1004 01:06:44.436237  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.436247  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.436260  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.436272  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.436443  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"298","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1004 01:06:44.436814  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.436826  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.436833  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.436841  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.438648  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:44.438663  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.438671  148021 round_trippers.go:580]     Audit-Id: 46e722a4-a794-491b-bda9-2145a5560000
	I1004 01:06:44.438679  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.438688  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.438697  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.438713  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.438722  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.438874  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.439145  148021 pod_ready.go:92] pod "kube-controller-manager-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:44.439161  148021 pod_ready.go:81] duration metric: took 5.075669ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.439175  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.586590  148021 request.go:629] Waited for 147.338827ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:06:44.586677  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:06:44.586689  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.586697  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.586703  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.589436  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:44.589456  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.589465  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.589473  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.589481  148021 round_trippers.go:580]     Audit-Id: 4bf07083-4066-4337-bdb2-18aeac8e7414
	I1004 01:06:44.589490  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.589499  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.589509  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.589734  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"408","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:06:44.786765  148021 request.go:629] Waited for 196.41351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.786841  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:44.786850  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.786864  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.786964  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.790027  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:44.790047  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.790057  148021 round_trippers.go:580]     Audit-Id: c887a5b9-733b-44b9-8375-703c8eca346d
	I1004 01:06:44.790066  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.790075  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.790096  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.790112  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.790120  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.790366  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:44.790672  148021 pod_ready.go:92] pod "kube-proxy-pz9j4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:44.790689  148021 pod_ready.go:81] duration metric: took 351.50597ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.790702  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:44.985905  148021 request.go:629] Waited for 195.085043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:06:44.985971  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:06:44.985979  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:44.985987  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:44.985996  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:44.988753  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:44.988770  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:44.988777  148021 round_trippers.go:580]     Audit-Id: 50cb5e78-f65b-4d42-a7ce-c28fd0a4cc76
	I1004 01:06:44.988783  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:44.988787  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:44.988793  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:44.988798  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:44.988803  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:44 GMT
	I1004 01:06:44.989065  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"301","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1004 01:06:45.185861  148021 request.go:629] Waited for 196.279618ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:45.185942  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:06:45.185954  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.185966  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.185980  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.188929  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:45.188954  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.188965  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.188973  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.188981  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.188993  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.189013  148021 round_trippers.go:580]     Audit-Id: 043d26f0-7739-4a5c-86cb-83b7aea97a8f
	I1004 01:06:45.189024  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.189204  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:06:45.189524  148021 pod_ready.go:92] pod "kube-scheduler-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:06:45.189542  148021 pod_ready.go:81] duration metric: took 398.831415ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:06:45.189556  148021 pod_ready.go:38] duration metric: took 2.799384461s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
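The readiness loop recorded above repeatedly fetches each system-critical pod and checks its Ready condition before moving on. A minimal, standalone sketch of that kind of check with client-go (not minikube's own code; it assumes a kubeconfig at the default ~/.kube/config path and uses the etcd pod name from this log purely as an example) could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll one control-plane pod until it reports Ready, as the log's pod_ready step does.
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-038823", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}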
	I1004 01:06:45.189592  148021 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:06:45.189648  148021 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:06:45.211592  148021 command_runner.go:130] > 1097
	I1004 01:06:45.211641  148021 api_server.go:72] duration metric: took 8.976291054s to wait for apiserver process to appear ...
	I1004 01:06:45.211653  148021 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:06:45.211675  148021 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:06:45.217779  148021 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1004 01:06:45.217886  148021 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1004 01:06:45.217896  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.217912  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.217920  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.218963  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:06:45.218981  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.218990  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.218998  148021 round_trippers.go:580]     Audit-Id: 9d83df1b-2156-4b82-aee7-676345b26a66
	I1004 01:06:45.219006  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.219015  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.219024  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.219037  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.219048  148021 round_trippers.go:580]     Content-Length: 263
	I1004 01:06:45.219074  148021 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1004 01:06:45.219169  148021 api_server.go:141] control plane version: v1.28.2
	I1004 01:06:45.219189  148021 api_server.go:131] duration metric: took 7.527326ms to wait for apiserver health ...
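The health check above amounts to plain GETs against /healthz and /version on the API server endpoint. A rough equivalent in standalone Go (an illustrative sketch only, reusing the https://192.168.39.212:8443 endpoint from this log and skipping certificate verification for brevity) might be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify is only for illustration; a real client would trust the cluster CA instead.
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}

	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.212:8443" + path)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d\n%s\n", path, resp.StatusCode, body)
	}
}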
	I1004 01:06:45.219200  148021 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:06:45.386606  148021 request.go:629] Waited for 167.316948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:06:45.386669  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:06:45.386676  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.386684  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.386697  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.390280  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:45.390302  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.390309  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.390318  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.390326  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.390336  148021 round_trippers.go:580]     Audit-Id: f3fdf9c3-2ad1-402a-b31e-7d4073c7ca06
	I1004 01:06:45.390345  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.390353  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.391574  148021 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"438","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1004 01:06:45.393230  148021 system_pods.go:59] 8 kube-system pods found
	I1004 01:06:45.393250  148021 system_pods.go:61] "coredns-5dd5756b68-xbln6" [956d98ac-25cb-4d19-a9c7-c3a9682eff67] Running
	I1004 01:06:45.393255  148021 system_pods.go:61] "etcd-multinode-038823" [040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13] Running
	I1004 01:06:45.393259  148021 system_pods.go:61] "kindnet-prsst" [1775280f-c3e2-4162-9287-9b58a90c8f83] Running
	I1004 01:06:45.393263  148021 system_pods.go:61] "kube-apiserver-multinode-038823" [8f46d14f-fac3-4029-af40-ad242d6e93e1] Running
	I1004 01:06:45.393269  148021 system_pods.go:61] "kube-controller-manager-multinode-038823" [ace8ff54-191a-4969-bc58-ad0440f25084] Running
	I1004 01:06:45.393275  148021 system_pods.go:61] "kube-proxy-pz9j4" [36f00e2f-5611-43ae-94b5-d9dde6784128] Running
	I1004 01:06:45.393279  148021 system_pods.go:61] "kube-scheduler-multinode-038823" [2da95c67-ae74-41db-a746-455fa043f9a7] Running
	I1004 01:06:45.393283  148021 system_pods.go:61] "storage-provisioner" [b4bd2f00-0b17-47da-add0-486f8232ea80] Running
	I1004 01:06:45.393288  148021 system_pods.go:74] duration metric: took 174.083006ms to wait for pod list to return data ...
	I1004 01:06:45.393302  148021 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:06:45.585671  148021 request.go:629] Waited for 192.286724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1004 01:06:45.585751  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1004 01:06:45.585756  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.585764  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.585778  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.588551  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:45.588572  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.588578  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.588584  148021 round_trippers.go:580]     Audit-Id: db623f0b-4070-4ec2-85ab-db7902e40d28
	I1004 01:06:45.588589  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.588594  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.588599  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.588604  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.588609  148021 round_trippers.go:580]     Content-Length: 261
	I1004 01:06:45.588629  148021 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cc30e8ea-fc59-44b4-adeb-db7afac19015","resourceVersion":"335","creationTimestamp":"2023-10-04T01:06:36Z"}}]}
	I1004 01:06:45.588835  148021 default_sa.go:45] found service account: "default"
	I1004 01:06:45.588852  148021 default_sa.go:55] duration metric: took 195.544567ms for default service account to be created ...
	I1004 01:06:45.588860  148021 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:06:45.786318  148021 request.go:629] Waited for 197.382226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:06:45.786385  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:06:45.786390  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.786397  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.786404  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.789790  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:06:45.789810  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.789817  148021 round_trippers.go:580]     Audit-Id: 5f206a27-9166-48a2-a046-c7ec223024bc
	I1004 01:06:45.789822  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.789827  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.789832  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.789851  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.789857  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.791298  148021 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"438","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53996 chars]
	I1004 01:06:45.792928  148021 system_pods.go:86] 8 kube-system pods found
	I1004 01:06:45.792948  148021 system_pods.go:89] "coredns-5dd5756b68-xbln6" [956d98ac-25cb-4d19-a9c7-c3a9682eff67] Running
	I1004 01:06:45.792953  148021 system_pods.go:89] "etcd-multinode-038823" [040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13] Running
	I1004 01:06:45.792957  148021 system_pods.go:89] "kindnet-prsst" [1775280f-c3e2-4162-9287-9b58a90c8f83] Running
	I1004 01:06:45.792961  148021 system_pods.go:89] "kube-apiserver-multinode-038823" [8f46d14f-fac3-4029-af40-ad242d6e93e1] Running
	I1004 01:06:45.792967  148021 system_pods.go:89] "kube-controller-manager-multinode-038823" [ace8ff54-191a-4969-bc58-ad0440f25084] Running
	I1004 01:06:45.792970  148021 system_pods.go:89] "kube-proxy-pz9j4" [36f00e2f-5611-43ae-94b5-d9dde6784128] Running
	I1004 01:06:45.792976  148021 system_pods.go:89] "kube-scheduler-multinode-038823" [2da95c67-ae74-41db-a746-455fa043f9a7] Running
	I1004 01:06:45.792980  148021 system_pods.go:89] "storage-provisioner" [b4bd2f00-0b17-47da-add0-486f8232ea80] Running
	I1004 01:06:45.792988  148021 system_pods.go:126] duration metric: took 204.119508ms to wait for k8s-apps to be running ...
	I1004 01:06:45.792997  148021 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:06:45.793040  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:06:45.807728  148021 system_svc.go:56] duration metric: took 14.720155ms WaitForService to wait for kubelet.
	I1004 01:06:45.807752  148021 kubeadm.go:581] duration metric: took 9.572404713s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:06:45.807769  148021 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:06:45.986224  148021 request.go:629] Waited for 178.348672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1004 01:06:45.986289  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:06:45.986294  148021 round_trippers.go:469] Request Headers:
	I1004 01:06:45.986302  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:06:45.986308  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:06:45.988987  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:06:45.989007  148021 round_trippers.go:577] Response Headers:
	I1004 01:06:45.989013  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:06:45.989019  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:06:45.989024  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:06:45.989029  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:06:45 GMT
	I1004 01:06:45.989034  148021 round_trippers.go:580]     Audit-Id: e3fe1fe0-bfe1-4e26-8905-766ec5c82599
	I1004 01:06:45.989039  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:06:45.989499  148021 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I1004 01:06:45.989834  148021 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:06:45.989892  148021 node_conditions.go:123] node cpu capacity is 2
	I1004 01:06:45.989903  148021 node_conditions.go:105] duration metric: took 182.129703ms to run NodePressure ...
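The NodePressure step reads each node's capacity fields and condition list from the NodeList response shown above. A small client-go sketch of the same lookup (again only an illustration, assuming a local kubeconfig) could be:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity values correspond to the "cpu capacity" and "storage ephemeral capacity" lines in the log.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}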
	I1004 01:06:45.989914  148021 start.go:228] waiting for startup goroutines ...
	I1004 01:06:45.989928  148021 start.go:233] waiting for cluster config update ...
	I1004 01:06:45.989936  148021 start.go:242] writing updated cluster config ...
	I1004 01:06:45.992117  148021 out.go:177] 
	I1004 01:06:45.993700  148021 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:06:45.993768  148021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:06:45.995473  148021 out.go:177] * Starting worker node multinode-038823-m02 in cluster multinode-038823
	I1004 01:06:45.996761  148021 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:06:45.996781  148021 cache.go:57] Caching tarball of preloaded images
	I1004 01:06:45.996885  148021 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:06:45.996897  148021 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:06:45.996954  148021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:06:45.997100  148021 start.go:365] acquiring machines lock for multinode-038823-m02: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:06:45.997140  148021 start.go:369] acquired machines lock for "multinode-038823-m02" in 20.282µs
	I1004 01:06:45.997156  148021 start.go:93] Provisioning new machine with config: &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:06:45.997216  148021 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1004 01:06:45.998806  148021 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 01:06:45.998900  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:06:45.998939  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:06:46.014312  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1004 01:06:46.014755  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:06:46.015190  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:06:46.015212  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:06:46.015668  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:06:46.015922  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:06:46.016130  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:06:46.016282  148021 start.go:159] libmachine.API.Create for "multinode-038823" (driver="kvm2")
	I1004 01:06:46.016320  148021 client.go:168] LocalClient.Create starting
	I1004 01:06:46.016359  148021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 01:06:46.016400  148021 main.go:141] libmachine: Decoding PEM data...
	I1004 01:06:46.016422  148021 main.go:141] libmachine: Parsing certificate...
	I1004 01:06:46.016491  148021 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 01:06:46.016518  148021 main.go:141] libmachine: Decoding PEM data...
	I1004 01:06:46.016539  148021 main.go:141] libmachine: Parsing certificate...
	I1004 01:06:46.016567  148021 main.go:141] libmachine: Running pre-create checks...
	I1004 01:06:46.016581  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .PreCreateCheck
	I1004 01:06:46.016772  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetConfigRaw
	I1004 01:06:46.017170  148021 main.go:141] libmachine: Creating machine...
	I1004 01:06:46.017184  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .Create
	I1004 01:06:46.017316  148021 main.go:141] libmachine: (multinode-038823-m02) Creating KVM machine...
	I1004 01:06:46.018630  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found existing default KVM network
	I1004 01:06:46.018801  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found existing private KVM network mk-multinode-038823
	I1004 01:06:46.018882  148021 main.go:141] libmachine: (multinode-038823-m02) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02 ...
	I1004 01:06:46.018933  148021 main.go:141] libmachine: (multinode-038823-m02) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 01:06:46.019018  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:46.018887  148404 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:06:46.019126  148021 main.go:141] libmachine: (multinode-038823-m02) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 01:06:46.243031  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:46.242874  148404 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa...
	I1004 01:06:46.376119  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:46.375979  148404 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/multinode-038823-m02.rawdisk...
	I1004 01:06:46.376175  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Writing magic tar header
	I1004 01:06:46.376195  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Writing SSH key tar header
	I1004 01:06:46.376210  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:46.376135  148404 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02 ...
	I1004 01:06:46.376338  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02
	I1004 01:06:46.376367  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02 (perms=drwx------)
	I1004 01:06:46.376383  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 01:06:46.376403  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:06:46.376418  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 01:06:46.376435  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 01:06:46.376449  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home/jenkins
	I1004 01:06:46.376465  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 01:06:46.376483  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 01:06:46.376500  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 01:06:46.376515  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 01:06:46.376528  148021 main.go:141] libmachine: (multinode-038823-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 01:06:46.376544  148021 main.go:141] libmachine: (multinode-038823-m02) Creating domain...
	I1004 01:06:46.376557  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Checking permissions on dir: /home
	I1004 01:06:46.376571  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Skipping /home - not owner
	I1004 01:06:46.377455  148021 main.go:141] libmachine: (multinode-038823-m02) define libvirt domain using xml: 
	I1004 01:06:46.377476  148021 main.go:141] libmachine: (multinode-038823-m02) <domain type='kvm'>
	I1004 01:06:46.377483  148021 main.go:141] libmachine: (multinode-038823-m02)   <name>multinode-038823-m02</name>
	I1004 01:06:46.377494  148021 main.go:141] libmachine: (multinode-038823-m02)   <memory unit='MiB'>2200</memory>
	I1004 01:06:46.377504  148021 main.go:141] libmachine: (multinode-038823-m02)   <vcpu>2</vcpu>
	I1004 01:06:46.377521  148021 main.go:141] libmachine: (multinode-038823-m02)   <features>
	I1004 01:06:46.377530  148021 main.go:141] libmachine: (multinode-038823-m02)     <acpi/>
	I1004 01:06:46.377540  148021 main.go:141] libmachine: (multinode-038823-m02)     <apic/>
	I1004 01:06:46.377566  148021 main.go:141] libmachine: (multinode-038823-m02)     <pae/>
	I1004 01:06:46.377585  148021 main.go:141] libmachine: (multinode-038823-m02)     
	I1004 01:06:46.377615  148021 main.go:141] libmachine: (multinode-038823-m02)   </features>
	I1004 01:06:46.377641  148021 main.go:141] libmachine: (multinode-038823-m02)   <cpu mode='host-passthrough'>
	I1004 01:06:46.377652  148021 main.go:141] libmachine: (multinode-038823-m02)   
	I1004 01:06:46.377665  148021 main.go:141] libmachine: (multinode-038823-m02)   </cpu>
	I1004 01:06:46.377677  148021 main.go:141] libmachine: (multinode-038823-m02)   <os>
	I1004 01:06:46.377685  148021 main.go:141] libmachine: (multinode-038823-m02)     <type>hvm</type>
	I1004 01:06:46.377692  148021 main.go:141] libmachine: (multinode-038823-m02)     <boot dev='cdrom'/>
	I1004 01:06:46.377706  148021 main.go:141] libmachine: (multinode-038823-m02)     <boot dev='hd'/>
	I1004 01:06:46.377722  148021 main.go:141] libmachine: (multinode-038823-m02)     <bootmenu enable='no'/>
	I1004 01:06:46.377738  148021 main.go:141] libmachine: (multinode-038823-m02)   </os>
	I1004 01:06:46.377751  148021 main.go:141] libmachine: (multinode-038823-m02)   <devices>
	I1004 01:06:46.377762  148021 main.go:141] libmachine: (multinode-038823-m02)     <disk type='file' device='cdrom'>
	I1004 01:06:46.377778  148021 main.go:141] libmachine: (multinode-038823-m02)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/boot2docker.iso'/>
	I1004 01:06:46.377786  148021 main.go:141] libmachine: (multinode-038823-m02)       <target dev='hdc' bus='scsi'/>
	I1004 01:06:46.377800  148021 main.go:141] libmachine: (multinode-038823-m02)       <readonly/>
	I1004 01:06:46.377816  148021 main.go:141] libmachine: (multinode-038823-m02)     </disk>
	I1004 01:06:46.377832  148021 main.go:141] libmachine: (multinode-038823-m02)     <disk type='file' device='disk'>
	I1004 01:06:46.377855  148021 main.go:141] libmachine: (multinode-038823-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 01:06:46.377870  148021 main.go:141] libmachine: (multinode-038823-m02)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/multinode-038823-m02.rawdisk'/>
	I1004 01:06:46.377881  148021 main.go:141] libmachine: (multinode-038823-m02)       <target dev='hda' bus='virtio'/>
	I1004 01:06:46.377893  148021 main.go:141] libmachine: (multinode-038823-m02)     </disk>
	I1004 01:06:46.377907  148021 main.go:141] libmachine: (multinode-038823-m02)     <interface type='network'>
	I1004 01:06:46.377921  148021 main.go:141] libmachine: (multinode-038823-m02)       <source network='mk-multinode-038823'/>
	I1004 01:06:46.377934  148021 main.go:141] libmachine: (multinode-038823-m02)       <model type='virtio'/>
	I1004 01:06:46.377950  148021 main.go:141] libmachine: (multinode-038823-m02)     </interface>
	I1004 01:06:46.377965  148021 main.go:141] libmachine: (multinode-038823-m02)     <interface type='network'>
	I1004 01:06:46.377978  148021 main.go:141] libmachine: (multinode-038823-m02)       <source network='default'/>
	I1004 01:06:46.377992  148021 main.go:141] libmachine: (multinode-038823-m02)       <model type='virtio'/>
	I1004 01:06:46.378003  148021 main.go:141] libmachine: (multinode-038823-m02)     </interface>
	I1004 01:06:46.378019  148021 main.go:141] libmachine: (multinode-038823-m02)     <serial type='pty'>
	I1004 01:06:46.378035  148021 main.go:141] libmachine: (multinode-038823-m02)       <target port='0'/>
	I1004 01:06:46.378048  148021 main.go:141] libmachine: (multinode-038823-m02)     </serial>
	I1004 01:06:46.378059  148021 main.go:141] libmachine: (multinode-038823-m02)     <console type='pty'>
	I1004 01:06:46.378074  148021 main.go:141] libmachine: (multinode-038823-m02)       <target type='serial' port='0'/>
	I1004 01:06:46.378086  148021 main.go:141] libmachine: (multinode-038823-m02)     </console>
	I1004 01:06:46.378099  148021 main.go:141] libmachine: (multinode-038823-m02)     <rng model='virtio'>
	I1004 01:06:46.378117  148021 main.go:141] libmachine: (multinode-038823-m02)       <backend model='random'>/dev/random</backend>
	I1004 01:06:46.378130  148021 main.go:141] libmachine: (multinode-038823-m02)     </rng>
	I1004 01:06:46.378153  148021 main.go:141] libmachine: (multinode-038823-m02)     
	I1004 01:06:46.378167  148021 main.go:141] libmachine: (multinode-038823-m02)     
	I1004 01:06:46.378182  148021 main.go:141] libmachine: (multinode-038823-m02)   </devices>
	I1004 01:06:46.378194  148021 main.go:141] libmachine: (multinode-038823-m02) </domain>
	I1004 01:06:46.378203  148021 main.go:141] libmachine: (multinode-038823-m02) 
	I1004 01:06:46.384768  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:26:1b:ed in network default
	I1004 01:06:46.385335  148021 main.go:141] libmachine: (multinode-038823-m02) Ensuring networks are active...
	I1004 01:06:46.385365  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:46.386096  148021 main.go:141] libmachine: (multinode-038823-m02) Ensuring network default is active
	I1004 01:06:46.386418  148021 main.go:141] libmachine: (multinode-038823-m02) Ensuring network mk-multinode-038823 is active
	I1004 01:06:46.386860  148021 main.go:141] libmachine: (multinode-038823-m02) Getting domain xml...
	I1004 01:06:46.387553  148021 main.go:141] libmachine: (multinode-038823-m02) Creating domain...
	I1004 01:06:47.611648  148021 main.go:141] libmachine: (multinode-038823-m02) Waiting to get IP...
	I1004 01:06:47.612384  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:47.612744  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:47.612775  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:47.612716  148404 retry.go:31] will retry after 229.62403ms: waiting for machine to come up
	I1004 01:06:47.844212  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:47.844678  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:47.844702  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:47.844623  148404 retry.go:31] will retry after 238.354578ms: waiting for machine to come up
	I1004 01:06:48.085057  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:48.085567  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:48.085609  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:48.085514  148404 retry.go:31] will retry after 455.231133ms: waiting for machine to come up
	I1004 01:06:48.542855  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:48.543428  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:48.543451  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:48.543380  148404 retry.go:31] will retry after 551.888976ms: waiting for machine to come up
	I1004 01:06:49.097274  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:49.097655  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:49.097685  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:49.097601  148404 retry.go:31] will retry after 690.338159ms: waiting for machine to come up
	I1004 01:06:49.789542  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:49.790168  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:49.790205  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:49.790121  148404 retry.go:31] will retry after 827.457917ms: waiting for machine to come up
	I1004 01:06:50.618836  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:50.619263  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:50.619305  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:50.619231  148404 retry.go:31] will retry after 858.752335ms: waiting for machine to come up
	I1004 01:06:51.479873  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:51.480355  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:51.480383  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:51.480300  148404 retry.go:31] will retry after 1.346885497s: waiting for machine to come up
	I1004 01:06:52.828742  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:52.829063  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:52.829093  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:52.829021  148404 retry.go:31] will retry after 1.814695926s: waiting for machine to come up
	I1004 01:06:54.646022  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:54.646548  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:54.646577  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:54.646495  148404 retry.go:31] will retry after 1.897044879s: waiting for machine to come up
	I1004 01:06:56.545345  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:56.545861  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:56.545892  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:56.545800  148404 retry.go:31] will retry after 1.968776428s: waiting for machine to come up
	I1004 01:06:58.516996  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:06:58.517811  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:06:58.517865  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:06:58.517747  148404 retry.go:31] will retry after 3.016275185s: waiting for machine to come up
	I1004 01:07:01.535783  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:01.536235  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:07:01.536268  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:07:01.536170  148404 retry.go:31] will retry after 4.071753779s: waiting for machine to come up
	I1004 01:07:05.612467  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:05.612912  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find current IP address of domain multinode-038823-m02 in network mk-multinode-038823
	I1004 01:07:05.612946  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | I1004 01:07:05.612864  148404 retry.go:31] will retry after 4.823249032s: waiting for machine to come up
	I1004 01:07:10.441046  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.441556  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has current primary IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.441601  148021 main.go:141] libmachine: (multinode-038823-m02) Found IP for machine: 192.168.39.181
	I1004 01:07:10.441616  148021 main.go:141] libmachine: (multinode-038823-m02) Reserving static IP address...
	I1004 01:07:10.441992  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | unable to find host DHCP lease matching {name: "multinode-038823-m02", mac: "52:54:00:57:fe:89", ip: "192.168.39.181"} in network mk-multinode-038823
	I1004 01:07:10.515927  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Getting to WaitForSSH function...
	I1004 01:07:10.515966  148021 main.go:141] libmachine: (multinode-038823-m02) Reserved static IP address: 192.168.39.181
	I1004 01:07:10.516019  148021 main.go:141] libmachine: (multinode-038823-m02) Waiting for SSH to be available...
	I1004 01:07:10.518841  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.519309  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:10.519347  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.519510  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Using SSH client type: external
	I1004 01:07:10.519540  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa (-rw-------)
	I1004 01:07:10.519575  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.181 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:07:10.519591  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | About to run SSH command:
	I1004 01:07:10.519607  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | exit 0
	I1004 01:07:10.609775  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | SSH cmd err, output: <nil>: 
	I1004 01:07:10.610087  148021 main.go:141] libmachine: (multinode-038823-m02) KVM machine creation complete!
	I1004 01:07:10.610447  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetConfigRaw
	I1004 01:07:10.611031  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:10.611248  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:10.611401  148021 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 01:07:10.611418  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetState
	I1004 01:07:10.612692  148021 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 01:07:10.612711  148021 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 01:07:10.612727  148021 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 01:07:10.612737  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:10.615321  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.615694  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:10.615736  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.615805  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:10.616000  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.616151  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.616295  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:10.616447  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:10.616812  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:10.616825  148021 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 01:07:10.725462  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:07:10.725489  148021 main.go:141] libmachine: Detecting the provisioner...
	I1004 01:07:10.725502  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:10.728051  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.728420  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:10.728452  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.728595  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:10.728822  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.728994  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.729170  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:10.729344  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:10.729715  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:10.729732  148021 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 01:07:10.839082  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 01:07:10.839180  148021 main.go:141] libmachine: found compatible host: buildroot
	I1004 01:07:10.839193  148021 main.go:141] libmachine: Provisioning with buildroot...
	I1004 01:07:10.839208  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:07:10.839499  148021 buildroot.go:166] provisioning hostname "multinode-038823-m02"
	I1004 01:07:10.839529  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:07:10.839722  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:10.842255  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.842681  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:10.842712  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.842897  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:10.843081  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.843298  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.843427  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:10.843598  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:10.843916  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:10.843932  148021 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823-m02 && echo "multinode-038823-m02" | sudo tee /etc/hostname
	I1004 01:07:10.967910  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-038823-m02
	
	I1004 01:07:10.967950  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:10.970912  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.971281  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:10.971316  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:10.971546  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:10.971771  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.971915  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:10.972040  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:10.972197  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:10.972589  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:10.972609  148021 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-038823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-038823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-038823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:07:11.089956  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:07:11.089993  148021 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:07:11.090017  148021 buildroot.go:174] setting up certificates
	I1004 01:07:11.090029  148021 provision.go:83] configureAuth start
	I1004 01:07:11.090045  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:07:11.090342  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:07:11.093239  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.093600  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.093634  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.093706  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:11.096032  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.096389  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.096422  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.096523  148021 provision.go:138] copyHostCerts
	I1004 01:07:11.096555  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:07:11.096588  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:07:11.096596  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:07:11.096658  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:07:11.096766  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:07:11.096792  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:07:11.096798  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:07:11.096827  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:07:11.096871  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:07:11.096886  148021 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:07:11.096892  148021 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:07:11.096912  148021 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:07:11.096958  148021 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.multinode-038823-m02 san=[192.168.39.181 192.168.39.181 localhost 127.0.0.1 minikube multinode-038823-m02]
	I1004 01:07:11.250848  148021 provision.go:172] copyRemoteCerts
	I1004 01:07:11.250909  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:07:11.250936  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:11.253758  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.254135  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.254170  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.254369  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:11.254557  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.254754  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:11.254883  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:07:11.341469  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 01:07:11.341542  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:07:11.370143  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 01:07:11.370220  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1004 01:07:11.394381  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 01:07:11.394460  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 01:07:11.416867  148021 provision.go:86] duration metric: configureAuth took 326.818204ms
	I1004 01:07:11.416907  148021 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:07:11.417134  148021 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:07:11.417261  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:11.420251  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.420739  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.420775  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.420953  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:11.421207  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.421449  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.421654  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:11.421857  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:11.422198  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:11.422214  148021 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:07:11.739493  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:07:11.739528  148021 main.go:141] libmachine: Checking connection to Docker...
	I1004 01:07:11.739544  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetURL
	I1004 01:07:11.740867  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | Using libvirt version 6000000
	I1004 01:07:11.743345  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.743689  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.743723  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.743853  148021 main.go:141] libmachine: Docker is up and running!
	I1004 01:07:11.743866  148021 main.go:141] libmachine: Reticulating splines...
	I1004 01:07:11.743874  148021 client.go:171] LocalClient.Create took 25.727542218s
	I1004 01:07:11.743904  148021 start.go:167] duration metric: libmachine.API.Create for "multinode-038823" took 25.72762451s
	I1004 01:07:11.743916  148021 start.go:300] post-start starting for "multinode-038823-m02" (driver="kvm2")
	I1004 01:07:11.743929  148021 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:07:11.743951  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:11.744226  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:07:11.744252  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:11.746463  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.746774  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.746806  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.746958  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:11.747162  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.747358  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:11.747539  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:07:11.831171  148021 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:07:11.835255  148021 command_runner.go:130] > NAME=Buildroot
	I1004 01:07:11.835276  148021 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1004 01:07:11.835283  148021 command_runner.go:130] > ID=buildroot
	I1004 01:07:11.835289  148021 command_runner.go:130] > VERSION_ID=2021.02.12
	I1004 01:07:11.835296  148021 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1004 01:07:11.835444  148021 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:07:11.835468  148021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:07:11.835533  148021 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:07:11.835604  148021 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:07:11.835615  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 01:07:11.835692  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:07:11.843798  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:07:11.867473  148021 start.go:303] post-start completed in 123.541972ms
	I1004 01:07:11.867532  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetConfigRaw
	I1004 01:07:11.868106  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:07:11.871213  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.871592  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.871616  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.871874  148021 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:07:11.872077  148021 start.go:128] duration metric: createHost completed in 25.874849816s
	I1004 01:07:11.872101  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:11.874360  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.874712  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.874747  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.874923  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:11.875111  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.875323  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:11.875483  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:11.875641  148021 main.go:141] libmachine: Using SSH client type: native
	I1004 01:07:11.875964  148021 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:07:11.875974  148021 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:07:11.987437  148021 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696381631.969968927
	
	I1004 01:07:11.987461  148021 fix.go:206] guest clock: 1696381631.969968927
	I1004 01:07:11.987468  148021 fix.go:219] Guest: 2023-10-04 01:07:11.969968927 +0000 UTC Remote: 2023-10-04 01:07:11.872089619 +0000 UTC m=+94.484733271 (delta=97.879308ms)
	I1004 01:07:11.987482  148021 fix.go:190] guest clock delta is within tolerance: 97.879308ms
	I1004 01:07:11.987487  148021 start.go:83] releasing machines lock for "multinode-038823-m02", held for 25.990338668s
	I1004 01:07:11.987506  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:11.987801  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:07:11.990311  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.990700  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:11.990739  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:11.993241  148021 out.go:177] * Found network options:
	I1004 01:07:11.994892  148021 out.go:177]   - NO_PROXY=192.168.39.212
	W1004 01:07:11.996303  148021 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:07:11.996334  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:11.996925  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:11.997120  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:07:11.997218  148021 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:07:11.997258  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	W1004 01:07:11.997328  148021 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:07:11.997416  148021 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:07:11.997444  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:07:12.000058  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:12.000255  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:12.000410  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:12.000440  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:12.000642  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:12.000758  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:12.000797  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:12.000857  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:12.000959  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:07:12.001043  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:12.001116  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:07:12.001216  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:07:12.001252  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:07:12.001372  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:07:12.236164  148021 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 01:07:12.236274  148021 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 01:07:12.244498  148021 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 01:07:12.245066  148021 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:07:12.245149  148021 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:07:12.260419  148021 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1004 01:07:12.260489  148021 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:07:12.260501  148021 start.go:469] detecting cgroup driver to use...
	I1004 01:07:12.260578  148021 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:07:12.275472  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:07:12.288732  148021 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:07:12.288792  148021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:07:12.303000  148021 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:07:12.317747  148021 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:07:12.425889  148021 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1004 01:07:12.426048  148021 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:07:12.551805  148021 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1004 01:07:12.551851  148021 docker.go:213] disabling docker service ...
	I1004 01:07:12.551908  148021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:07:12.566872  148021 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:07:12.578932  148021 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1004 01:07:12.579258  148021 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:07:12.594248  148021 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1004 01:07:12.703445  148021 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:07:12.832794  148021 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1004 01:07:12.832824  148021 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1004 01:07:12.832882  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:07:12.847245  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:07:12.865336  148021 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 01:07:12.865793  148021 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:07:12.865860  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:07:12.877717  148021 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:07:12.877778  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:07:12.889127  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:07:12.900589  148021 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:07:12.911203  148021 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:07:12.922042  148021 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:07:12.931484  148021 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:07:12.931527  148021 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:07:12.931572  148021 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:07:12.944540  148021 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:07:12.954248  148021 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:07:13.076379  148021 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:07:13.258231  148021 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:07:13.258302  148021 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:07:13.263790  148021 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 01:07:13.263821  148021 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 01:07:13.263834  148021 command_runner.go:130] > Device: 16h/22d	Inode: 703         Links: 1
	I1004 01:07:13.263845  148021 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:07:13.263854  148021 command_runner.go:130] > Access: 2023-10-04 01:07:13.221338376 +0000
	I1004 01:07:13.263865  148021 command_runner.go:130] > Modify: 2023-10-04 01:07:13.221338376 +0000
	I1004 01:07:13.263873  148021 command_runner.go:130] > Change: 2023-10-04 01:07:13.222340109 +0000
	I1004 01:07:13.263882  148021 command_runner.go:130] >  Birth: -
	I1004 01:07:13.263906  148021 start.go:537] Will wait 60s for crictl version
	I1004 01:07:13.263963  148021 ssh_runner.go:195] Run: which crictl
	I1004 01:07:13.267856  148021 command_runner.go:130] > /usr/bin/crictl
	I1004 01:07:13.267941  148021 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:07:13.304531  148021 command_runner.go:130] > Version:  0.1.0
	I1004 01:07:13.304560  148021 command_runner.go:130] > RuntimeName:  cri-o
	I1004 01:07:13.304597  148021 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1004 01:07:13.304936  148021 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 01:07:13.306836  148021 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
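With CRI-O restarted and the socket present, the same runtime information can be queried by hand; a sketch, assuming crictl picks up the endpoint written to /etc/crictl.yaml earlier:

	sudo crictl version
	# or point at the socket explicitly
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version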
	I1004 01:07:13.306929  148021 ssh_runner.go:195] Run: crio --version
	I1004 01:07:13.353745  148021 command_runner.go:130] > crio version 1.24.1
	I1004 01:07:13.353772  148021 command_runner.go:130] > Version:          1.24.1
	I1004 01:07:13.353781  148021 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:07:13.353788  148021 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:07:13.353797  148021 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:07:13.353805  148021 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:07:13.353812  148021 command_runner.go:130] > Compiler:         gc
	I1004 01:07:13.353828  148021 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:07:13.353850  148021 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:07:13.353867  148021 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:07:13.353875  148021 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:07:13.353882  148021 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:07:13.353968  148021 ssh_runner.go:195] Run: crio --version
	I1004 01:07:13.401741  148021 command_runner.go:130] > crio version 1.24.1
	I1004 01:07:13.401784  148021 command_runner.go:130] > Version:          1.24.1
	I1004 01:07:13.401795  148021 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:07:13.401802  148021 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:07:13.401812  148021 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:07:13.401820  148021 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:07:13.401827  148021 command_runner.go:130] > Compiler:         gc
	I1004 01:07:13.401835  148021 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:07:13.401858  148021 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:07:13.401874  148021 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:07:13.401884  148021 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:07:13.401890  148021 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:07:13.403912  148021 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:07:13.405276  148021 out.go:177]   - env NO_PROXY=192.168.39.212
	I1004 01:07:13.406486  148021 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:07:13.409165  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:13.409534  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:07:13.409566  148021 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:07:13.409893  148021 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:07:13.414189  148021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
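The one-liner above rewrites /etc/hosts in one pass: it drops any existing host.minikube.internal entry, appends the mapping to the host-side gateway of the mk-multinode-038823 network, and swaps the temp file back in. An annotated equivalent of the same command:

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
	  echo "192.168.39.1	host.minikube.internal"       # re-add the mapping to the host gateway
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                        # replace the file in one step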
	I1004 01:07:13.428662  148021 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823 for IP: 192.168.39.181
	I1004 01:07:13.428697  148021 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:07:13.428841  148021 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:07:13.428879  148021 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:07:13.428891  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 01:07:13.428906  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 01:07:13.428919  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 01:07:13.428929  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 01:07:13.428978  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:07:13.429009  148021 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:07:13.429025  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:07:13.429052  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:07:13.429077  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:07:13.429101  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:07:13.429143  148021 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:07:13.429167  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:07:13.429180  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 01:07:13.429192  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 01:07:13.429526  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:07:13.456081  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:07:13.482349  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:07:13.508850  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:07:13.535911  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:07:13.564896  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:07:13.591401  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
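At this point the node holds the shared CA material under /var/lib/minikube/certs and the user certificates under /usr/share/ca-certificates. A quick way to confirm the copies landed (a sketch; the paths are the ones shown in the scp lines above):

	sudo ls -l /var/lib/minikube/certs
	ls -l /usr/share/ca-certificates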
	I1004 01:07:13.615510  148021 ssh_runner.go:195] Run: openssl version
	I1004 01:07:13.621293  148021 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1004 01:07:13.621646  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:07:13.631761  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:07:13.636407  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:07:13.636635  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:07:13.636717  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:07:13.642400  148021 command_runner.go:130] > b5213941
	I1004 01:07:13.642596  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:07:13.652275  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:07:13.662666  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:07:13.667441  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:07:13.667495  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:07:13.667552  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:07:13.673401  148021 command_runner.go:130] > 51391683
	I1004 01:07:13.673495  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:07:13.683947  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:07:13.694740  148021 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:07:13.699555  148021 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:07:13.699741  148021 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:07:13.699799  148021 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:07:13.705378  148021 command_runner.go:130] > 3ec20f2e
	I1004 01:07:13.705649  148021 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
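The openssl/ln pairs above register each copied certificate with the system trust store: OpenSSL looks certificates up by subject-hash symlinks in /etc/ssl/certs, so each PEM is hashed and linked under that name. A sketch for a single certificate, using the minikubeCA.pem case from the log (b5213941 is the hash the log reports):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"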
	I1004 01:07:13.715354  148021 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:07:13.719805  148021 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:07:13.719854  148021 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:07:13.719947  148021 ssh_runner.go:195] Run: crio config
	I1004 01:07:13.776254  148021 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 01:07:13.776283  148021 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 01:07:13.776294  148021 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 01:07:13.776299  148021 command_runner.go:130] > #
	I1004 01:07:13.776317  148021 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 01:07:13.776328  148021 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 01:07:13.776339  148021 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 01:07:13.776349  148021 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 01:07:13.776356  148021 command_runner.go:130] > # reload'.
	I1004 01:07:13.776364  148021 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 01:07:13.776374  148021 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 01:07:13.776389  148021 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 01:07:13.776408  148021 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 01:07:13.776414  148021 command_runner.go:130] > [crio]
	I1004 01:07:13.776426  148021 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 01:07:13.776437  148021 command_runner.go:130] > # containers images, in this directory.
	I1004 01:07:13.776448  148021 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 01:07:13.776462  148021 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 01:07:13.776474  148021 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 01:07:13.776485  148021 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 01:07:13.776498  148021 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 01:07:13.776509  148021 command_runner.go:130] > storage_driver = "overlay"
	I1004 01:07:13.776522  148021 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 01:07:13.776535  148021 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 01:07:13.776544  148021 command_runner.go:130] > storage_option = [
	I1004 01:07:13.776550  148021 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 01:07:13.776556  148021 command_runner.go:130] > ]
	I1004 01:07:13.776570  148021 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 01:07:13.776582  148021 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 01:07:13.776593  148021 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 01:07:13.776603  148021 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 01:07:13.776616  148021 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 01:07:13.776627  148021 command_runner.go:130] > # always happen on a node reboot
	I1004 01:07:13.776633  148021 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 01:07:13.776643  148021 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 01:07:13.776660  148021 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 01:07:13.776677  148021 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 01:07:13.776689  148021 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1004 01:07:13.776704  148021 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 01:07:13.776719  148021 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 01:07:13.776727  148021 command_runner.go:130] > # internal_wipe = true
	I1004 01:07:13.776736  148021 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 01:07:13.776750  148021 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 01:07:13.776761  148021 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 01:07:13.776774  148021 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 01:07:13.776787  148021 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 01:07:13.776795  148021 command_runner.go:130] > [crio.api]
	I1004 01:07:13.776804  148021 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 01:07:13.776812  148021 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 01:07:13.776819  148021 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 01:07:13.776824  148021 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 01:07:13.776834  148021 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 01:07:13.776844  148021 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 01:07:13.776854  148021 command_runner.go:130] > # stream_port = "0"
	I1004 01:07:13.776868  148021 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 01:07:13.776876  148021 command_runner.go:130] > # stream_enable_tls = false
	I1004 01:07:13.776889  148021 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 01:07:13.776899  148021 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 01:07:13.776908  148021 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 01:07:13.776921  148021 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 01:07:13.776931  148021 command_runner.go:130] > # minutes.
	I1004 01:07:13.776939  148021 command_runner.go:130] > # stream_tls_cert = ""
	I1004 01:07:13.776953  148021 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 01:07:13.776967  148021 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 01:07:13.776977  148021 command_runner.go:130] > # stream_tls_key = ""
	I1004 01:07:13.776990  148021 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 01:07:13.777002  148021 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 01:07:13.777011  148021 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 01:07:13.777041  148021 command_runner.go:130] > # stream_tls_ca = ""
	I1004 01:07:13.777059  148021 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:07:13.777067  148021 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 01:07:13.777080  148021 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:07:13.777091  148021 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 01:07:13.777110  148021 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 01:07:13.777122  148021 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 01:07:13.777130  148021 command_runner.go:130] > [crio.runtime]
	I1004 01:07:13.777145  148021 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 01:07:13.777158  148021 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 01:07:13.777166  148021 command_runner.go:130] > # "nofile=1024:2048"
	I1004 01:07:13.777179  148021 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 01:07:13.777189  148021 command_runner.go:130] > # default_ulimits = [
	I1004 01:07:13.777199  148021 command_runner.go:130] > # ]
	I1004 01:07:13.777209  148021 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 01:07:13.777219  148021 command_runner.go:130] > # no_pivot = false
	I1004 01:07:13.777230  148021 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 01:07:13.777249  148021 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 01:07:13.777261  148021 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 01:07:13.777274  148021 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 01:07:13.777282  148021 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 01:07:13.777296  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:07:13.777314  148021 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 01:07:13.777330  148021 command_runner.go:130] > # Cgroup setting for conmon
	I1004 01:07:13.777343  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 01:07:13.777350  148021 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 01:07:13.777363  148021 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 01:07:13.777380  148021 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 01:07:13.777397  148021 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:07:13.777404  148021 command_runner.go:130] > conmon_env = [
	I1004 01:07:13.777415  148021 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 01:07:13.777420  148021 command_runner.go:130] > ]
	I1004 01:07:13.777429  148021 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 01:07:13.777437  148021 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 01:07:13.777447  148021 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 01:07:13.777455  148021 command_runner.go:130] > # default_env = [
	I1004 01:07:13.777460  148021 command_runner.go:130] > # ]
	I1004 01:07:13.777475  148021 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 01:07:13.777485  148021 command_runner.go:130] > # selinux = false
	I1004 01:07:13.777496  148021 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 01:07:13.777510  148021 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 01:07:13.777520  148021 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 01:07:13.777529  148021 command_runner.go:130] > # seccomp_profile = ""
	I1004 01:07:13.777541  148021 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 01:07:13.777551  148021 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 01:07:13.777565  148021 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 01:07:13.777577  148021 command_runner.go:130] > # which might increase security.
	I1004 01:07:13.777588  148021 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 01:07:13.777601  148021 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 01:07:13.777615  148021 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 01:07:13.777627  148021 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 01:07:13.777640  148021 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 01:07:13.777648  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:07:13.777660  148021 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 01:07:13.777670  148021 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 01:07:13.777682  148021 command_runner.go:130] > # the cgroup blockio controller.
	I1004 01:07:13.777689  148021 command_runner.go:130] > # blockio_config_file = ""
	I1004 01:07:13.777703  148021 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 01:07:13.777714  148021 command_runner.go:130] > # irqbalance daemon.
	I1004 01:07:13.777724  148021 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 01:07:13.777735  148021 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 01:07:13.777744  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:07:13.777753  148021 command_runner.go:130] > # rdt_config_file = ""
	I1004 01:07:13.777762  148021 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 01:07:13.777776  148021 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 01:07:13.777783  148021 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 01:07:13.777791  148021 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 01:07:13.777797  148021 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 01:07:13.777804  148021 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 01:07:13.777808  148021 command_runner.go:130] > # will be added.
	I1004 01:07:13.777812  148021 command_runner.go:130] > # default_capabilities = [
	I1004 01:07:13.777819  148021 command_runner.go:130] > # 	"CHOWN",
	I1004 01:07:13.777823  148021 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 01:07:13.777826  148021 command_runner.go:130] > # 	"FSETID",
	I1004 01:07:13.777834  148021 command_runner.go:130] > # 	"FOWNER",
	I1004 01:07:13.777857  148021 command_runner.go:130] > # 	"SETGID",
	I1004 01:07:13.777868  148021 command_runner.go:130] > # 	"SETUID",
	I1004 01:07:13.777875  148021 command_runner.go:130] > # 	"SETPCAP",
	I1004 01:07:13.777882  148021 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 01:07:13.777890  148021 command_runner.go:130] > # 	"KILL",
	I1004 01:07:13.777900  148021 command_runner.go:130] > # ]
	I1004 01:07:13.777910  148021 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 01:07:13.777923  148021 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:07:13.777933  148021 command_runner.go:130] > # default_sysctls = [
	I1004 01:07:13.777939  148021 command_runner.go:130] > # ]
	I1004 01:07:13.777948  148021 command_runner.go:130] > # List of devices on the host that a
	I1004 01:07:13.777990  148021 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 01:07:13.778001  148021 command_runner.go:130] > # allowed_devices = [
	I1004 01:07:13.778008  148021 command_runner.go:130] > # 	"/dev/fuse",
	I1004 01:07:13.778016  148021 command_runner.go:130] > # ]
	I1004 01:07:13.778022  148021 command_runner.go:130] > # List of additional devices. specified as
	I1004 01:07:13.778037  148021 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 01:07:13.778049  148021 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 01:07:13.778075  148021 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:07:13.778086  148021 command_runner.go:130] > # additional_devices = [
	I1004 01:07:13.778094  148021 command_runner.go:130] > # ]
	I1004 01:07:13.778103  148021 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 01:07:13.778112  148021 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 01:07:13.778117  148021 command_runner.go:130] > # 	"/etc/cdi",
	I1004 01:07:13.778126  148021 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 01:07:13.778131  148021 command_runner.go:130] > # ]
	I1004 01:07:13.778146  148021 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 01:07:13.778157  148021 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 01:07:13.778167  148021 command_runner.go:130] > # Defaults to false.
	I1004 01:07:13.778176  148021 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 01:07:13.778189  148021 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 01:07:13.778203  148021 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 01:07:13.778211  148021 command_runner.go:130] > # hooks_dir = [
	I1004 01:07:13.778216  148021 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 01:07:13.778224  148021 command_runner.go:130] > # ]
	I1004 01:07:13.778235  148021 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 01:07:13.778249  148021 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 01:07:13.778258  148021 command_runner.go:130] > # its default mounts from the following two files:
	I1004 01:07:13.778267  148021 command_runner.go:130] > #
	I1004 01:07:13.778279  148021 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 01:07:13.778301  148021 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 01:07:13.778321  148021 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 01:07:13.778329  148021 command_runner.go:130] > #
	I1004 01:07:13.778340  148021 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 01:07:13.778355  148021 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 01:07:13.778366  148021 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 01:07:13.778378  148021 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 01:07:13.778386  148021 command_runner.go:130] > #
	I1004 01:07:13.778394  148021 command_runner.go:130] > # default_mounts_file = ""
	I1004 01:07:13.778406  148021 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 01:07:13.778420  148021 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 01:07:13.778430  148021 command_runner.go:130] > pids_limit = 1024
	I1004 01:07:13.778440  148021 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1004 01:07:13.778455  148021 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 01:07:13.778466  148021 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 01:07:13.778483  148021 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 01:07:13.778493  148021 command_runner.go:130] > # log_size_max = -1
	I1004 01:07:13.778505  148021 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1004 01:07:13.778515  148021 command_runner.go:130] > # log_to_journald = false
	I1004 01:07:13.778524  148021 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 01:07:13.778533  148021 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 01:07:13.778547  148021 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 01:07:13.778560  148021 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 01:07:13.778570  148021 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 01:07:13.778581  148021 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 01:07:13.778594  148021 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 01:07:13.778604  148021 command_runner.go:130] > # read_only = false
	I1004 01:07:13.778615  148021 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 01:07:13.778627  148021 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 01:07:13.778634  148021 command_runner.go:130] > # live configuration reload.
	I1004 01:07:13.778640  148021 command_runner.go:130] > # log_level = "info"
	I1004 01:07:13.778653  148021 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 01:07:13.778666  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:07:13.778676  148021 command_runner.go:130] > # log_filter = ""
	I1004 01:07:13.778689  148021 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 01:07:13.778702  148021 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 01:07:13.778711  148021 command_runner.go:130] > # separated by comma.
	I1004 01:07:13.778716  148021 command_runner.go:130] > # uid_mappings = ""
	I1004 01:07:13.778726  148021 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 01:07:13.778740  148021 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 01:07:13.778751  148021 command_runner.go:130] > # separated by comma.
	I1004 01:07:13.778758  148021 command_runner.go:130] > # gid_mappings = ""
	I1004 01:07:13.778772  148021 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 01:07:13.778783  148021 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:07:13.778795  148021 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:07:13.778804  148021 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 01:07:13.778814  148021 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 01:07:13.778825  148021 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:07:13.778834  148021 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:07:13.778845  148021 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 01:07:13.778859  148021 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 01:07:13.778873  148021 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 01:07:13.778906  148021 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 01:07:13.778913  148021 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 01:07:13.778922  148021 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 01:07:13.778941  148021 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 01:07:13.778953  148021 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 01:07:13.778965  148021 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 01:07:13.778975  148021 command_runner.go:130] > drop_infra_ctr = false
	I1004 01:07:13.778986  148021 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 01:07:13.778996  148021 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 01:07:13.779005  148021 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 01:07:13.779015  148021 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 01:07:13.779030  148021 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 01:07:13.779040  148021 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 01:07:13.779051  148021 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 01:07:13.779065  148021 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 01:07:13.779076  148021 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 01:07:13.779090  148021 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 01:07:13.779101  148021 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1004 01:07:13.779110  148021 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1004 01:07:13.779120  148021 command_runner.go:130] > # default_runtime = "runc"
	I1004 01:07:13.779129  148021 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 01:07:13.779145  148021 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 01:07:13.779161  148021 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I1004 01:07:13.779173  148021 command_runner.go:130] > # creation as a file is not desired either.
	I1004 01:07:13.779187  148021 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 01:07:13.779197  148021 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 01:07:13.779210  148021 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 01:07:13.779220  148021 command_runner.go:130] > # ]
	I1004 01:07:13.779230  148021 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 01:07:13.779246  148021 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 01:07:13.779260  148021 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1004 01:07:13.779271  148021 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1004 01:07:13.779277  148021 command_runner.go:130] > #
	I1004 01:07:13.779285  148021 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1004 01:07:13.779301  148021 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1004 01:07:13.779310  148021 command_runner.go:130] > #  runtime_type = "oci"
	I1004 01:07:13.779322  148021 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1004 01:07:13.779333  148021 command_runner.go:130] > #  privileged_without_host_devices = false
	I1004 01:07:13.779343  148021 command_runner.go:130] > #  allowed_annotations = []
	I1004 01:07:13.779353  148021 command_runner.go:130] > # Where:
	I1004 01:07:13.779362  148021 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1004 01:07:13.779372  148021 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1004 01:07:13.779383  148021 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 01:07:13.779397  148021 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 01:07:13.779408  148021 command_runner.go:130] > #   in $PATH.
	I1004 01:07:13.779421  148021 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1004 01:07:13.779432  148021 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 01:07:13.779446  148021 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1004 01:07:13.779455  148021 command_runner.go:130] > #   state.
	I1004 01:07:13.779463  148021 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 01:07:13.779474  148021 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1004 01:07:13.779488  148021 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 01:07:13.779501  148021 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 01:07:13.779515  148021 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 01:07:13.779527  148021 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 01:07:13.779538  148021 command_runner.go:130] > #   The currently recognized values are:
	I1004 01:07:13.779550  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 01:07:13.779559  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 01:07:13.779569  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 01:07:13.779583  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 01:07:13.779596  148021 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 01:07:13.779612  148021 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 01:07:13.779625  148021 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 01:07:13.779639  148021 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1004 01:07:13.779649  148021 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 01:07:13.779654  148021 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 01:07:13.779664  148021 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 01:07:13.779671  148021 command_runner.go:130] > runtime_type = "oci"
	I1004 01:07:13.779702  148021 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 01:07:13.779712  148021 command_runner.go:130] > runtime_config_path = ""
	I1004 01:07:13.779721  148021 command_runner.go:130] > monitor_path = ""
	I1004 01:07:13.779731  148021 command_runner.go:130] > monitor_cgroup = ""
	I1004 01:07:13.779741  148021 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 01:07:13.779753  148021 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1004 01:07:13.779759  148021 command_runner.go:130] > # running containers
	I1004 01:07:13.779767  148021 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1004 01:07:13.779782  148021 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1004 01:07:13.779815  148021 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1004 01:07:13.779829  148021 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1004 01:07:13.779841  148021 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1004 01:07:13.779849  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1004 01:07:13.779855  148021 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1004 01:07:13.779866  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1004 01:07:13.779878  148021 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1004 01:07:13.779886  148021 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1004 01:07:13.779900  148021 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 01:07:13.779912  148021 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 01:07:13.779926  148021 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 01:07:13.779941  148021 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 01:07:13.779952  148021 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 01:07:13.779964  148021 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 01:07:13.779983  148021 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 01:07:13.780000  148021 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 01:07:13.780012  148021 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 01:07:13.780027  148021 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 01:07:13.780037  148021 command_runner.go:130] > # Example:
	I1004 01:07:13.780043  148021 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 01:07:13.780051  148021 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 01:07:13.780059  148021 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 01:07:13.780072  148021 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 01:07:13.780084  148021 command_runner.go:130] > # cpuset = 0
	I1004 01:07:13.780094  148021 command_runner.go:130] > # cpushares = "0-1"
	I1004 01:07:13.780104  148021 command_runner.go:130] > # Where:
	I1004 01:07:13.780112  148021 command_runner.go:130] > # The workload name is workload-type.
	I1004 01:07:13.780127  148021 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 01:07:13.780136  148021 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 01:07:13.780144  148021 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 01:07:13.780160  148021 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 01:07:13.780173  148021 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 01:07:13.780179  148021 command_runner.go:130] > # 
	I1004 01:07:13.780194  148021 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 01:07:13.780204  148021 command_runner.go:130] > #
	I1004 01:07:13.780214  148021 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 01:07:13.780227  148021 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 01:07:13.780239  148021 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 01:07:13.780248  148021 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 01:07:13.780258  148021 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 01:07:13.780268  148021 command_runner.go:130] > [crio.image]
	I1004 01:07:13.780279  148021 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 01:07:13.780294  148021 command_runner.go:130] > # default_transport = "docker://"
	I1004 01:07:13.780307  148021 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 01:07:13.780321  148021 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:07:13.780332  148021 command_runner.go:130] > # global_auth_file = ""
	I1004 01:07:13.780341  148021 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 01:07:13.780348  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:07:13.780359  148021 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1004 01:07:13.780373  148021 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 01:07:13.780384  148021 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:07:13.780396  148021 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:07:13.780406  148021 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 01:07:13.780418  148021 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 01:07:13.780431  148021 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 01:07:13.780441  148021 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 01:07:13.780452  148021 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 01:07:13.780463  148021 command_runner.go:130] > # pause_command = "/pause"
	I1004 01:07:13.780474  148021 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 01:07:13.780488  148021 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 01:07:13.780501  148021 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 01:07:13.780530  148021 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 01:07:13.780542  148021 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 01:07:13.780553  148021 command_runner.go:130] > # signature_policy = ""
	I1004 01:07:13.780567  148021 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 01:07:13.780581  148021 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 01:07:13.780591  148021 command_runner.go:130] > # changing them here.
	I1004 01:07:13.780601  148021 command_runner.go:130] > # insecure_registries = [
	I1004 01:07:13.780615  148021 command_runner.go:130] > # ]
	I1004 01:07:13.780622  148021 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 01:07:13.780636  148021 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 01:07:13.780651  148021 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 01:07:13.780664  148021 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 01:07:13.780675  148021 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 01:07:13.780689  148021 command_runner.go:130] > # The crio.network table containers settings pertaining to the management of
	I1004 01:07:13.780698  148021 command_runner.go:130] > # CNI plugins.
	I1004 01:07:13.780705  148021 command_runner.go:130] > [crio.network]
	I1004 01:07:13.780717  148021 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 01:07:13.780726  148021 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 01:07:13.780733  148021 command_runner.go:130] > # cni_default_network = ""
	I1004 01:07:13.780746  148021 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 01:07:13.780755  148021 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 01:07:13.780768  148021 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 01:07:13.780776  148021 command_runner.go:130] > # plugin_dirs = [
	I1004 01:07:13.780783  148021 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 01:07:13.780793  148021 command_runner.go:130] > # ]
	I1004 01:07:13.780803  148021 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 01:07:13.780812  148021 command_runner.go:130] > [crio.metrics]
	I1004 01:07:13.780818  148021 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 01:07:13.780826  148021 command_runner.go:130] > enable_metrics = true
	I1004 01:07:13.780834  148021 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 01:07:13.780846  148021 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 01:07:13.780858  148021 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1004 01:07:13.780872  148021 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 01:07:13.780885  148021 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 01:07:13.780895  148021 command_runner.go:130] > # metrics_collectors = [
	I1004 01:07:13.780905  148021 command_runner.go:130] > # 	"operations",
	I1004 01:07:13.780913  148021 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 01:07:13.780920  148021 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 01:07:13.780927  148021 command_runner.go:130] > # 	"operations_errors",
	I1004 01:07:13.780937  148021 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 01:07:13.780948  148021 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 01:07:13.780960  148021 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 01:07:13.780971  148021 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 01:07:13.780979  148021 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 01:07:13.780989  148021 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 01:07:13.781000  148021 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 01:07:13.781006  148021 command_runner.go:130] > # 	"containers_oom_total",
	I1004 01:07:13.781013  148021 command_runner.go:130] > # 	"containers_oom",
	I1004 01:07:13.781024  148021 command_runner.go:130] > # 	"processes_defunct",
	I1004 01:07:13.781032  148021 command_runner.go:130] > # 	"operations_total",
	I1004 01:07:13.781043  148021 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 01:07:13.781055  148021 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 01:07:13.781065  148021 command_runner.go:130] > # 	"operations_errors_total",
	I1004 01:07:13.781075  148021 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 01:07:13.781085  148021 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 01:07:13.781090  148021 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 01:07:13.781099  148021 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 01:07:13.781107  148021 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 01:07:13.781118  148021 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 01:07:13.781125  148021 command_runner.go:130] > # ]
	I1004 01:07:13.781137  148021 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 01:07:13.781147  148021 command_runner.go:130] > # metrics_port = 9090
	I1004 01:07:13.781159  148021 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 01:07:13.781169  148021 command_runner.go:130] > # metrics_socket = ""
	I1004 01:07:13.781178  148021 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 01:07:13.781189  148021 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 01:07:13.781196  148021 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 01:07:13.781206  148021 command_runner.go:130] > # certificate on any modification event.
	I1004 01:07:13.781217  148021 command_runner.go:130] > # metrics_cert = ""
	I1004 01:07:13.781226  148021 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 01:07:13.781238  148021 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 01:07:13.781248  148021 command_runner.go:130] > # metrics_key = ""
	I1004 01:07:13.781258  148021 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 01:07:13.781267  148021 command_runner.go:130] > [crio.tracing]
	I1004 01:07:13.781277  148021 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 01:07:13.781286  148021 command_runner.go:130] > # enable_tracing = false
	I1004 01:07:13.781297  148021 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1004 01:07:13.781307  148021 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 01:07:13.781320  148021 command_runner.go:130] > # Number of samples to collect per million spans.
	I1004 01:07:13.781328  148021 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 01:07:13.781342  148021 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 01:07:13.781353  148021 command_runner.go:130] > [crio.stats]
	I1004 01:07:13.781366  148021 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 01:07:13.781378  148021 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 01:07:13.781388  148021 command_runner.go:130] > # stats_collection_period = 0
	I1004 01:07:13.781434  148021 command_runner.go:130] ! time="2023-10-04 01:07:13.758478144Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1004 01:07:13.781456  148021 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
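The CRI-O config dump above leaves metrics enabled (enable_metrics = true) on the commented default metrics_port of 9090. A minimal Go sketch for spot-checking that endpoint from the node; the port, address, and the crio_operations prefix filter are assumptions taken from the commented defaults above, not something this test runs:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// CRI-O serves Prometheus metrics on metrics_port (default 9090) when
	// enable_metrics = true; this assumes local access on the node.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only the crio_operations* series mentioned in the comments above.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "crio_operations") {
			fmt.Println(sc.Text())
		}
	}
}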
	I1004 01:07:13.781531  148021 cni.go:84] Creating CNI manager for ""
	I1004 01:07:13.781544  148021 cni.go:136] 2 nodes found, recommending kindnet
	I1004 01:07:13.781556  148021 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:07:13.781582  148021 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.181 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-038823 NodeName:multinode-038823-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:07:13.781733  148021 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-038823-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:07:13.781798  148021 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-038823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
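The kubelet drop-in above (the [Service] ExecStart block) is generated from the node's settings before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal sketch of the same templating idea with text/template; the struct and template here are illustrative, not minikube's actual code, and the values are the ones shown in the log for the joining worker:

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds only the fields the drop-in below interpolates;
// both the struct and the template are illustrative.
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
	CRISocket         string
}

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above for multinode-038823-m02.
	opts := kubeletOpts{
		KubernetesVersion: "v1.28.2",
		NodeName:          "multinode-038823-m02",
		NodeIP:            "192.168.39.181",
		CRISocket:         "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}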
	I1004 01:07:13.781877  148021 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:07:13.790667  148021 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	I1004 01:07:13.790852  148021 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.2': No such file or directory
	
	Initiating transfer...
	I1004 01:07:13.790914  148021 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.2
	I1004 01:07:13.799641  148021 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256
	I1004 01:07:13.799671  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubectl -> /var/lib/minikube/binaries/v1.28.2/kubectl
	I1004 01:07:13.799640  148021 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubeadm
	I1004 01:07:13.799746  148021 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl
	I1004 01:07:13.799642  148021 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubelet
	I1004 01:07:13.804029  148021 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1004 01:07:13.804131  148021 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubectl': No such file or directory
	I1004 01:07:13.804163  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubectl --> /var/lib/minikube/binaries/v1.28.2/kubectl (49864704 bytes)
	I1004 01:07:14.504651  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubeadm -> /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1004 01:07:14.504740  148021 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm
	I1004 01:07:14.510392  148021 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1004 01:07:14.510434  148021 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubeadm': No such file or directory
	I1004 01:07:14.510459  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubeadm --> /var/lib/minikube/binaries/v1.28.2/kubeadm (50757632 bytes)
	I1004 01:07:15.051391  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:07:15.065771  148021 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubelet -> /var/lib/minikube/binaries/v1.28.2/kubelet
	I1004 01:07:15.065911  148021 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet
	I1004 01:07:15.070452  148021 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1004 01:07:15.070499  148021 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.2/kubelet': No such file or directory
	I1004 01:07:15.070528  148021 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubelet --> /var/lib/minikube/binaries/v1.28.2/kubelet (110776320 bytes)
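The three transfers above (kubectl, kubeadm, kubelet) follow the same pattern: stat the target under /var/lib/minikube/binaries/v1.28.2, and only copy from the local cache when that check fails. A minimal local sketch of that check-then-copy step; the real flow runs the stat and the copy over SSH, and the paths below are the ones from the log:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst is missing, mirroring the
// "existence check ... Process exited with status 1" branch in the log.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	} else if !os.IsNotExist(err) {
		return err
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := ".minikube/cache/linux/amd64/v1.28.2"
	target := "/var/lib/minikube/binaries/v1.28.2"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		if err := ensureBinary(filepath.Join(cache, bin), filepath.Join(target, bin)); err != nil {
			fmt.Fprintln(os.Stderr, bin, err)
		}
	}
}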
	I1004 01:07:15.642614  148021 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1004 01:07:15.652432  148021 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1004 01:07:15.670502  148021 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:07:15.687223  148021 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1004 01:07:15.690955  148021 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
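The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the control-plane IP: drop any existing mapping for the name, then append the current one. The same filter-then-append step sketched in Go, writing to a separate file rather than /etc/hosts itself:

package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.212\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Mirror the grep -v / echo pipeline from the log: keep every line that
	// does not already map the name, then append the fresh mapping.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}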
	I1004 01:07:15.703920  148021 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:07:15.704235  148021 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:07:15.704392  148021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:07:15.704437  148021 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:07:15.719183  148021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1004 01:07:15.719649  148021 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:07:15.720119  148021 main.go:141] libmachine: Using API Version  1
	I1004 01:07:15.720140  148021 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:07:15.720470  148021 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:07:15.720662  148021 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:07:15.720824  148021 start.go:304] JoinCluster: &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:07:15.720922  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 01:07:15.720938  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:07:15.723910  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:07:15.724521  148021 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:07:15.724557  148021 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:07:15.724752  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:07:15.724978  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:07:15.725128  148021 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:07:15.725284  148021 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:07:15.903126  148021 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token smbvle.n9hjr7prulqkp07y --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:07:15.906219  148021 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:07:15.906267  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smbvle.n9hjr7prulqkp07y --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-038823-m02"
	I1004 01:07:15.955219  148021 command_runner.go:130] > [preflight] Running pre-flight checks
	I1004 01:07:16.104328  148021 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1004 01:07:16.104376  148021 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1004 01:07:16.146329  148021 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:07:16.146366  148021 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:07:16.146374  148021 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1004 01:07:16.274070  148021 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1004 01:07:18.294831  148021 command_runner.go:130] > This node has joined the cluster:
	I1004 01:07:18.294866  148021 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1004 01:07:18.294877  148021 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1004 01:07:18.294887  148021 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1004 01:07:18.296507  148021 command_runner.go:130] ! W1004 01:07:15.944190     823 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1004 01:07:18.296543  148021 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:07:18.296582  148021 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token smbvle.n9hjr7prulqkp07y --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-038823-m02": (2.390291291s)
	I1004 01:07:18.296610  148021 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 01:07:18.570650  148021 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1004 01:07:18.570692  148021 start.go:306] JoinCluster complete in 2.84986643s
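JoinCluster above boils down to two commands: `kubeadm token create --print-join-command --ttl=0` on the control plane, then the printed join command (plus the extra flags visible in the log) on the new node. A minimal sketch of that sequence with os/exec, assembling the command locally for illustration; the real flow executes both steps over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the control plane for a join command; --ttl=0 makes the token
	// non-expiring, matching the flags in the log above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// minikube appends these flags before running the command on the worker
	// node over SSH; here we only assemble and print it.
	join += " --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-038823-m02"
	fmt.Println(join)
}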
	I1004 01:07:18.570705  148021 cni.go:84] Creating CNI manager for ""
	I1004 01:07:18.570712  148021 cni.go:136] 2 nodes found, recommending kindnet
	I1004 01:07:18.570769  148021 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 01:07:18.576440  148021 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1004 01:07:18.576478  148021 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1004 01:07:18.576490  148021 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1004 01:07:18.576498  148021 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:07:18.576509  148021 command_runner.go:130] > Access: 2023-10-04 01:05:51.007018562 +0000
	I1004 01:07:18.576516  148021 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1004 01:07:18.576521  148021 command_runner.go:130] > Change: 2023-10-04 01:05:49.121018562 +0000
	I1004 01:07:18.576525  148021 command_runner.go:130] >  Birth: -
	I1004 01:07:18.576633  148021 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 01:07:18.576656  148021 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 01:07:18.596518  148021 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 01:07:18.936567  148021 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:07:18.942774  148021 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:07:18.945628  148021 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1004 01:07:18.958502  148021 command_runner.go:130] > daemonset.apps/kindnet configured
	I1004 01:07:18.961576  148021 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:07:18.961811  148021 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:07:18.962184  148021 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:07:18.962200  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:18.962212  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:18.962221  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:18.964707  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:18.964724  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:18.964730  148021 round_trippers.go:580]     Audit-Id: 7470555e-780c-4a9e-b67f-050b01d41df2
	I1004 01:07:18.964736  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:18.964741  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:18.964746  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:18.964751  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:18.964755  148021 round_trippers.go:580]     Content-Length: 291
	I1004 01:07:18.964761  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:18 GMT
	I1004 01:07:18.964784  148021 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"442","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1004 01:07:18.964874  148021 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-038823" context rescaled to 1 replicas
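The GET against .../deployments/coredns/scale above reads the deployment's Scale subresource before rescaling coredns to one replica. The same read-then-update can be done with client-go's GetScale/UpdateScale; a minimal sketch, assuming a kubeconfig at the default home path and with error handling reduced to panics:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	// Read the Scale subresource of the coredns deployment, as in the GET above.
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns replicas:", scale.Spec.Replicas)
}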
	I1004 01:07:18.964900  148021 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:07:18.966690  148021 out.go:177] * Verifying Kubernetes components...
	I1004 01:07:18.968081  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:07:18.981807  148021 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:07:18.982148  148021 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:07:18.982383  148021 node_ready.go:35] waiting up to 6m0s for node "multinode-038823-m02" to be "Ready" ...
	I1004 01:07:18.982447  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:18.982456  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:18.982463  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:18.982470  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:18.986195  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:18.986217  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:18.986227  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:18.986236  148021 round_trippers.go:580]     Content-Length: 3531
	I1004 01:07:18.986244  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:18 GMT
	I1004 01:07:18.986252  148021 round_trippers.go:580]     Audit-Id: 65319a3d-75d6-488b-94f0-5158d00921c9
	I1004 01:07:18.986261  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:18.986278  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:18.986291  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:18.986506  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"489","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1004 01:07:18.986846  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:18.986860  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:18.986870  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:18.986878  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:18.989497  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:18.989512  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:18.989518  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:18.989524  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:18.989529  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:18.989534  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:18.989539  148021 round_trippers.go:580]     Content-Length: 3531
	I1004 01:07:18.989545  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:18 GMT
	I1004 01:07:18.989553  148021 round_trippers.go:580]     Audit-Id: d34b1f28-4eff-419d-b118-62d628b92365
	I1004 01:07:18.989754  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"489","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1004 01:07:19.490827  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:19.490851  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:19.490859  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:19.490865  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:19.680470  148021 round_trippers.go:574] Response Status: 200 OK in 189 milliseconds
	I1004 01:07:19.680503  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:19.680513  148021 round_trippers.go:580]     Audit-Id: e696b739-0845-4b41-92b5-0916778e5825
	I1004 01:07:19.680520  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:19.680527  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:19.680534  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:19.680542  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:19.680551  148021 round_trippers.go:580]     Content-Length: 3531
	I1004 01:07:19.680564  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:19 GMT
	I1004 01:07:19.680662  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"489","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1004 01:07:19.991040  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:19.991070  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:19.991081  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:19.991090  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:19.993964  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:19.993997  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:19.994012  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:19.994023  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:19.994032  148021 round_trippers.go:580]     Content-Length: 3531
	I1004 01:07:19.994040  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:19 GMT
	I1004 01:07:19.994049  148021 round_trippers.go:580]     Audit-Id: 558d4ce4-626c-4387-a43b-8b529852a491
	I1004 01:07:19.994058  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:19.994070  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:19.994155  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"489","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1004 01:07:20.490657  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:20.490689  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:20.490701  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:20.490709  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:20.494422  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:20.494446  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:20.494454  148021 round_trippers.go:580]     Content-Length: 3531
	I1004 01:07:20.494459  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:20 GMT
	I1004 01:07:20.494466  148021 round_trippers.go:580]     Audit-Id: c5682c66-1892-44a7-9599-c4402776cdc8
	I1004 01:07:20.494474  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:20.494481  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:20.494489  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:20.494497  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:20.494708  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"489","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2507 chars]
	I1004 01:07:20.990844  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:20.990869  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:20.990878  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:20.990884  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:20.995053  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:07:20.995082  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:20.995093  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:20.995102  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:20.995112  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:20.995121  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:20.995134  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:20 GMT
	I1004 01:07:20.995147  148021 round_trippers.go:580]     Audit-Id: 90d1231d-2712-427f-823b-6657cf38f0aa
	I1004 01:07:20.995159  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:20.995304  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:20.995626  148021 node_ready.go:58] node "multinode-038823-m02" has status "Ready":"False"
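The repeated GETs against /api/v1/nodes/multinode-038823-m02 are a readiness poll: fetch the node and check whether its Ready condition is True, retrying until the 6m0s budget runs out. The same check expressed with client-go, as a minimal sketch; the polling interval and kubeconfig path are assumptions, not taken from the test:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the same
// check behind the `has status "Ready":"False"` lines in the log.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "multinode-038823-m02", metav1.GetOptions{})
		if err != nil {
			panic(err) // includes context deadline exceeded once the budget is spent
		}
		if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly every 500ms
	}
}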
	I1004 01:07:21.491105  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:21.491131  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:21.491141  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:21.491149  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:21.494719  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:21.494790  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:21.494808  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:21 GMT
	I1004 01:07:21.494823  148021 round_trippers.go:580]     Audit-Id: b862bcaf-fdb8-4912-bdb9-2e103be0e4f5
	I1004 01:07:21.494832  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:21.494840  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:21.494852  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:21.494859  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:21.494871  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:21.495067  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:21.991077  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:21.991100  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:21.991108  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:21.991114  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:21.994383  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:21.994409  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:21.994416  148021 round_trippers.go:580]     Audit-Id: 17480f92-312a-4be9-98c1-8a100c8d963a
	I1004 01:07:21.994421  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:21.994427  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:21.994432  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:21.994437  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:21.994442  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:21.994447  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:21 GMT
	I1004 01:07:21.994528  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:22.490273  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:22.490294  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:22.490302  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:22.490310  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:22.493428  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:22.493450  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:22.493459  148021 round_trippers.go:580]     Audit-Id: 9a1e3496-64ff-4e5d-af3f-87a6bad64a55
	I1004 01:07:22.493466  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:22.493473  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:22.493480  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:22.493488  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:22.493498  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:22.493511  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:22 GMT
	I1004 01:07:22.493614  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:22.990672  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:22.990700  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:22.990709  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:22.990715  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:22.993942  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:22.993966  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:22.993976  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:22 GMT
	I1004 01:07:22.993983  148021 round_trippers.go:580]     Audit-Id: ddca96f2-0195-4f85-9a85-849f35829758
	I1004 01:07:22.993990  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:22.993998  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:22.994005  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:22.994014  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:22.994023  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:22.994068  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:23.490624  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:23.490657  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:23.490668  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:23.490676  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:23.493309  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:23.493331  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:23.493340  148021 round_trippers.go:580]     Audit-Id: 7a60b193-1868-4856-950a-1e1236c4d725
	I1004 01:07:23.493347  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:23.493355  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:23.493362  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:23.493371  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:23.493381  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:23.493391  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:23 GMT
	I1004 01:07:23.493509  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:23.493765  148021 node_ready.go:58] node "multinode-038823-m02" has status "Ready":"False"
	I1004 01:07:23.991069  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:23.991097  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:23.991108  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:23.991116  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:23.994176  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:23.994221  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:23.994233  148021 round_trippers.go:580]     Audit-Id: e33b46fa-2bd6-4889-b173-cf7c402411d3
	I1004 01:07:23.994242  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:23.994251  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:23.994259  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:23.994267  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:23.994298  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:23.994312  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:23 GMT
	I1004 01:07:23.994468  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:24.490975  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:24.491002  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:24.491010  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:24.491016  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:24.493974  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:24.494002  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:24.494013  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:24.494021  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:24 GMT
	I1004 01:07:24.494038  148021 round_trippers.go:580]     Audit-Id: 41985479-4be6-44a8-89ac-632e5235e844
	I1004 01:07:24.494052  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:24.494060  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:24.494069  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:24.494077  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:24.494178  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:24.990743  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:24.990774  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:24.990787  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:24.990796  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:24.993809  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:24.993865  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:24.993879  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:24.993891  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:24.993900  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:24.993906  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:24 GMT
	I1004 01:07:24.993912  148021 round_trippers.go:580]     Audit-Id: 0c5bf0b1-8166-4981-967b-277e9efe8ca2
	I1004 01:07:24.993917  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:24.993923  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:24.994036  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:25.491086  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:25.491109  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:25.491118  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:25.491125  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:25.494389  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:25.494411  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:25.494419  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:25.494425  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:25 GMT
	I1004 01:07:25.494430  148021 round_trippers.go:580]     Audit-Id: 3ac224fb-f0b0-40bf-b276-f26918f813bf
	I1004 01:07:25.494436  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:25.494441  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:25.494446  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:25.494454  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:25.494529  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:25.494835  148021 node_ready.go:58] node "multinode-038823-m02" has status "Ready":"False"
	I1004 01:07:25.990846  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:25.990876  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:25.990888  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:25.990898  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:25.994590  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:25.994618  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:25.994626  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:25 GMT
	I1004 01:07:25.994631  148021 round_trippers.go:580]     Audit-Id: 25e30935-73bc-4b3f-8ac7-5bb817f22041
	I1004 01:07:25.994637  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:25.994642  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:25.994647  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:25.994656  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:25.994665  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:25.994788  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:26.490930  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:26.490954  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:26.490962  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:26.490968  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:26.494508  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:26.494531  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:26.494544  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:26 GMT
	I1004 01:07:26.494553  148021 round_trippers.go:580]     Audit-Id: 1c252e24-c5f9-4759-847f-f0c041d3f663
	I1004 01:07:26.494561  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:26.494570  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:26.494579  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:26.494587  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:26.494596  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:26.494676  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:26.990243  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:26.990275  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:26.990284  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:26.990290  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:26.993723  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:26.993747  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:26.993756  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:26.993765  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:26 GMT
	I1004 01:07:26.993780  148021 round_trippers.go:580]     Audit-Id: 8c015860-7008-4b82-ae5c-3e703700eadd
	I1004 01:07:26.993790  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:26.993798  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:26.993806  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:26.993813  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:26.993912  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:27.490958  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:27.490984  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:27.490997  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:27.491019  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:27.494326  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:27.494361  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:27.494373  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:27.494382  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:27.494391  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:27.494403  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:27.494419  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:27.494430  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:27 GMT
	I1004 01:07:27.494441  148021 round_trippers.go:580]     Audit-Id: 433f29ad-ef2b-4bd5-aadf-54bf8545803b
	I1004 01:07:27.494550  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:27.494872  148021 node_ready.go:58] node "multinode-038823-m02" has status "Ready":"False"
	I1004 01:07:27.991080  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:27.991103  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:27.991112  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:27.991118  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:27.994019  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:27.994048  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:27.994058  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:27.994083  148021 round_trippers.go:580]     Content-Length: 3640
	I1004 01:07:27.994092  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:27 GMT
	I1004 01:07:27.994104  148021 round_trippers.go:580]     Audit-Id: 26e36f57-2758-4ea2-ab56-8604d66a9863
	I1004 01:07:27.994110  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:27.994121  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:27.994129  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:27.994240  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"498","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2616 chars]
	I1004 01:07:28.490893  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:28.490918  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.490926  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.490932  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.493753  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:28.493777  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.493796  148021 round_trippers.go:580]     Audit-Id: 602013a9-e370-46b8-ab6c-bc1da2d28e7a
	I1004 01:07:28.493804  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.493809  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.493814  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.493819  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.493824  148021 round_trippers.go:580]     Content-Length: 3726
	I1004 01:07:28.493830  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.493944  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"521","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1004 01:07:28.494244  148021 node_ready.go:49] node "multinode-038823-m02" has status "Ready":"True"
	I1004 01:07:28.494261  148021 node_ready.go:38] duration metric: took 9.511864018s waiting for node "multinode-038823-m02" to be "Ready" ...
	I1004 01:07:28.494270  148021 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:07:28.494330  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:07:28.494338  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.494345  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.494350  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.497815  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:28.497833  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.497847  148021 round_trippers.go:580]     Audit-Id: bf2e4f7e-6781-4c76-adc1-a5cf39d070af
	I1004 01:07:28.497853  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.497862  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.497869  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.497880  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.497888  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.498962  148021 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"522"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"438","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67372 chars]
	I1004 01:07:28.500945  148021 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.501015  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:07:28.501024  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.501031  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.501037  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.503156  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:28.503179  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.503189  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.503197  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.503203  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.503209  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.503215  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.503220  148021 round_trippers.go:580]     Audit-Id: 7645658c-1e1c-42f8-9996-eba8e1007e59
	I1004 01:07:28.503457  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"438","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1004 01:07:28.503858  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:28.503869  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.503876  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.503881  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.505778  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:07:28.505795  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.505805  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.505812  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.505827  148021 round_trippers.go:580]     Audit-Id: 7981ae7c-ee50-4f3e-be66-8aa382b7382b
	I1004 01:07:28.505836  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.505862  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.505870  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.506202  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:28.506470  148021 pod_ready.go:92] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:28.506482  148021 pod_ready.go:81] duration metric: took 5.5179ms waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.506490  148021 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.506539  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:07:28.506546  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.506553  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.506558  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.508350  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:07:28.508364  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.508370  148021 round_trippers.go:580]     Audit-Id: 7ed1e54a-d8f9-49ab-8b0c-f470716a9fc7
	I1004 01:07:28.508375  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.508380  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.508387  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.508396  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.508408  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.508539  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"324","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1004 01:07:28.508845  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:28.508855  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.508862  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.508867  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.512866  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:28.512882  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.512889  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.512894  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.512899  148021 round_trippers.go:580]     Audit-Id: 30e66450-e060-445f-a151-306ddd3bf119
	I1004 01:07:28.512904  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.512909  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.512917  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.513266  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:28.513520  148021 pod_ready.go:92] pod "etcd-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:28.513532  148021 pod_ready.go:81] duration metric: took 7.036687ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.513544  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.513592  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:07:28.513599  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.513606  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.513611  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.516241  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:28.516257  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.516263  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.516269  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.516273  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.516282  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.516287  148021 round_trippers.go:580]     Audit-Id: 52724dfb-ed1f-4ad5-8819-43c4e8fe5dd1
	I1004 01:07:28.516298  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.516755  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"323","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1004 01:07:28.517082  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:28.517093  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.517100  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.517107  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.518762  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:07:28.518779  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.518787  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.518802  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.518816  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.518822  148021 round_trippers.go:580]     Audit-Id: 4d169288-7164-4fc5-b4e1-9012799a021d
	I1004 01:07:28.518829  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.518836  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.519158  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:28.519492  148021 pod_ready.go:92] pod "kube-apiserver-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:28.519508  148021 pod_ready.go:81] duration metric: took 5.954592ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.519518  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.519573  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:07:28.519583  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.519596  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.519609  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.521627  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:28.521640  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.521646  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.521662  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.521674  148021 round_trippers.go:580]     Audit-Id: 96b26ef9-7b72-4e68-ac09-8216df955fe4
	I1004 01:07:28.521687  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.521699  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.521705  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.521984  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"298","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1004 01:07:28.522309  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:28.522320  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.522327  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.522334  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.524127  148021 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:07:28.524142  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.524147  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.524153  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.524158  148021 round_trippers.go:580]     Audit-Id: 018bbe51-b470-4b1f-b195-c85456985806
	I1004 01:07:28.524166  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.524172  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.524179  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.524306  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:28.524559  148021 pod_ready.go:92] pod "kube-controller-manager-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:28.524570  148021 pod_ready.go:81] duration metric: took 5.045236ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.524578  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.690907  148021 request.go:629] Waited for 166.265075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:07:28.690991  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:07:28.690998  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.691007  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.691015  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.693954  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:28.693977  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.693984  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.693989  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.693994  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.693999  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.694004  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.694009  148021 round_trippers.go:580]     Audit-Id: 372b38d7-4f61-4ef1-b12f-0714a76a7233
	I1004 01:07:28.694674  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"505","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1004 01:07:28.891574  148021 request.go:629] Waited for 196.429816ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:28.891663  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:07:28.891668  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:28.891677  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:28.891683  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:28.894702  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:28.894722  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:28.894728  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:28.894733  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:28.894741  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:28.894747  148021 round_trippers.go:580]     Content-Length: 3726
	I1004 01:07:28.894752  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:28 GMT
	I1004 01:07:28.894757  148021 round_trippers.go:580]     Audit-Id: 9b2cc907-aa79-472c-8dd7-0f9d14ff81f4
	I1004 01:07:28.894762  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:28.894834  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"521","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2702 chars]
	I1004 01:07:28.895079  148021 pod_ready.go:92] pod "kube-proxy-hgg2z" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:28.895097  148021 pod_ready.go:81] duration metric: took 370.511606ms waiting for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:28.895111  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:29.091543  148021 request.go:629] Waited for 196.355593ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:07:29.091641  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:07:29.091653  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:29.091669  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:29.091680  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:29.096874  148021 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:07:29.096896  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:29.096903  148021 round_trippers.go:580]     Audit-Id: 97e90f82-40e2-416b-9b16-8ace3b075412
	I1004 01:07:29.096908  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:29.096924  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:29.096932  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:29.096940  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:29.096952  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:29 GMT
	I1004 01:07:29.097340  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"408","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:07:29.291074  148021 request.go:629] Waited for 193.298065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:29.291141  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:29.291146  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:29.291154  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:29.291161  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:29.295248  148021 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:07:29.295273  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:29.295280  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:29 GMT
	I1004 01:07:29.295286  148021 round_trippers.go:580]     Audit-Id: fa5a6ce6-7525-4fa3-9e95-45a9cb9c13e7
	I1004 01:07:29.295303  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:29.295309  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:29.295317  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:29.295322  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:29.296188  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:29.296500  148021 pod_ready.go:92] pod "kube-proxy-pz9j4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:29.296513  148021 pod_ready.go:81] duration metric: took 401.395855ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:29.296522  148021 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:29.491977  148021 request.go:629] Waited for 195.365736ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:07:29.492041  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:07:29.492046  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:29.492054  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:29.492060  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:29.495600  148021 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:07:29.495619  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:29.495625  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:29.495631  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:29 GMT
	I1004 01:07:29.495636  148021 round_trippers.go:580]     Audit-Id: ffc49601-21b6-45cf-a548-7df47505a3b6
	I1004 01:07:29.495641  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:29.495646  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:29.495654  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:29.496265  148021 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"301","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1004 01:07:29.690989  148021 request.go:629] Waited for 194.328256ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:29.691065  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:07:29.691072  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:29.691080  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:29.691090  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:29.693965  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:29.693984  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:29.693991  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:29.693996  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:29.694002  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:29.694007  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:29 GMT
	I1004 01:07:29.694014  148021 round_trippers.go:580]     Audit-Id: e66e8a16-2850-433c-8b76-24c06ba88738
	I1004 01:07:29.694020  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:29.694494  148021 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I1004 01:07:29.694798  148021 pod_ready.go:92] pod "kube-scheduler-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:07:29.694812  148021 pod_ready.go:81] duration metric: took 398.282491ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:07:29.694824  148021 pod_ready.go:38] duration metric: took 1.200545044s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:07:29.694841  148021 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:07:29.694885  148021 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:07:29.709307  148021 system_svc.go:56] duration metric: took 14.447183ms WaitForService to wait for kubelet.
	I1004 01:07:29.709338  148021 kubeadm.go:581] duration metric: took 10.744416783s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:07:29.709364  148021 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:07:29.891853  148021 request.go:629] Waited for 182.403684ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1004 01:07:29.891947  148021 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:07:29.891965  148021 round_trippers.go:469] Request Headers:
	I1004 01:07:29.891975  148021 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:07:29.891981  148021 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:07:29.894940  148021 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:07:29.894960  148021 round_trippers.go:577] Response Headers:
	I1004 01:07:29.894967  148021 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:07:29.894975  148021 round_trippers.go:580]     Content-Type: application/json
	I1004 01:07:29.894982  148021 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:07:29.894991  148021 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:07:29.895002  148021 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:07:29 GMT
	I1004 01:07:29.895009  148021 round_trippers.go:580]     Audit-Id: dbf16fc4-1934-4e47-85c9-e01d9096548a
	I1004 01:07:29.895403  148021 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"523"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"418","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I1004 01:07:29.895810  148021 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:07:29.895826  148021 node_conditions.go:123] node cpu capacity is 2
	I1004 01:07:29.895836  148021 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:07:29.895840  148021 node_conditions.go:123] node cpu capacity is 2
	I1004 01:07:29.895845  148021 node_conditions.go:105] duration metric: took 186.475668ms to run NodePressure ...
	I1004 01:07:29.895855  148021 start.go:228] waiting for startup goroutines ...
	I1004 01:07:29.895881  148021 start.go:242] writing updated cluster config ...
	I1004 01:07:29.896182  148021 ssh_runner.go:195] Run: rm -f paused
	I1004 01:07:29.947559  148021 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:07:29.950463  148021 out.go:177] * Done! kubectl is now configured to use "multinode-038823" cluster and "default" namespace by default
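	(Editorial note, not part of the captured log.) The start log above ends with minikube's readiness loop: each system pod is polled through the API server until its Ready condition reports True, then node conditions and the kubelet service are verified before the profile is declared done. The following is a minimal client-go sketch of an equivalent readiness poll, for illustration only; it is not minikube's pod_ready.go implementation, and the kubeconfig path and pod name are placeholders taken from the log above.

	// readiness_sketch.go - illustrative only; not minikube's implementation.
	// Polls a pod until its Ready condition is True, mirroring the
	// "waiting for pod ... to be Ready" lines in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Placeholder kubeconfig path; minikube manages its own profile-specific config.
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForPodReady(ctx, cs, "kube-system", "kube-proxy-pz9j4"); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}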
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:05:49 UTC, ends at Wed 2023-10-04 01:07:37 UTC. --
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.118128604Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381657118010510,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c657cbc2-ef46-46e7-8924-ab64ad07cce5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.118849684Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a77c0fc-a75f-4306-aee5-5839eb9da46b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.118914937Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a77c0fc-a75f-4306-aee5-5839eb9da46b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.119199701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e666677e5cc191842d5706600cb5b5921f8191ee49c9a309e60e2e940d3c2fb8,PodSandboxId:203663b1f25f55e14b743477e23e25d482d2e6becdd000406c372cd88f3094ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696381653148709931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479c3b27834af8dad3cfe92de0023f81e2886bd1253bbb798f9f81c5aafac83,PodSandboxId:55ec51b8b4734ba53201051d25784f6167cc901f2ebb7bdfc709e013a0bee72c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696381603145710142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6caac564bcda8b77cac18a077b023a84e3b4e05cb45a735cebc11062169319e,PodSandboxId:6a43086a12e24f0c9c7335c20020f8d3692ac86908911338ace97f073e6b3648,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381602877163613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f087174cb4dfb18a0f695260556073ae39c8ef1b0d1723e5657ecde621313,PodSandboxId:738beed60a216e05199d7875ffa4ad4194ddfe5dd9f41c2685e1fe985eba1ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696381600330544666,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c58b12ff05edcd9cc6f5ab42e521f8d7cbd8fd4dfaedac9a02bfde3ff6e88b4,PodSandboxId:4ab325853d98b287738fa40f989203ca05c3ec1804426914d60deec179a211be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696381597839410771,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6
784128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23761d877d02584fd5c962635b65f8de23d9580007843dedec5c4a78a764f0b,PodSandboxId:e864e51c01956b365faa9a4562fa31e153302e03f784cbeb0ae4ae5eed7f7edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696381576723099543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569b80cb348d01ee953a3cc75094947468a947d46fce30fe6046a8178fc9b530,PodSandboxId:aaf3465b22e98b953f734582c65a4d2d06eecce3b68d5d5db2dee51a9db40930,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696381576644129658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.h
ash: 18868ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5199c6af89324d4e3ad138a0f270d4673e7aa4c4dc634ab3984089709310fa0,PodSandboxId:72c983e72d6fbcd910034c7cfccf505591c301a74b300bdaaf317c4d4bc55fff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696381576016595055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1,PodSandboxId:309529d963aa206a4a08646c5e78ab4a674f69c9d6cf31a74d174883c989c6f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696381576034135065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes
.container.hash: a2e1edd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a77c0fc-a75f-4306-aee5-5839eb9da46b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.164612349Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8a031bb9-b6e0-4dff-a225-2b125d3e3312 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.164680141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8a031bb9-b6e0-4dff-a225-2b125d3e3312 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.170469213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8d7555f4-d054-42c5-af40-b43238dfb2df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.172355514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381657172221129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8d7555f4-d054-42c5-af40-b43238dfb2df name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.173240367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2eee0895-98ab-4d33-8b01-1ec0b6c1638a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.173317815Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2eee0895-98ab-4d33-8b01-1ec0b6c1638a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.173581598Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e666677e5cc191842d5706600cb5b5921f8191ee49c9a309e60e2e940d3c2fb8,PodSandboxId:203663b1f25f55e14b743477e23e25d482d2e6becdd000406c372cd88f3094ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696381653148709931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479c3b27834af8dad3cfe92de0023f81e2886bd1253bbb798f9f81c5aafac83,PodSandboxId:55ec51b8b4734ba53201051d25784f6167cc901f2ebb7bdfc709e013a0bee72c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696381603145710142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6caac564bcda8b77cac18a077b023a84e3b4e05cb45a735cebc11062169319e,PodSandboxId:6a43086a12e24f0c9c7335c20020f8d3692ac86908911338ace97f073e6b3648,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381602877163613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f087174cb4dfb18a0f695260556073ae39c8ef1b0d1723e5657ecde621313,PodSandboxId:738beed60a216e05199d7875ffa4ad4194ddfe5dd9f41c2685e1fe985eba1ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696381600330544666,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c58b12ff05edcd9cc6f5ab42e521f8d7cbd8fd4dfaedac9a02bfde3ff6e88b4,PodSandboxId:4ab325853d98b287738fa40f989203ca05c3ec1804426914d60deec179a211be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696381597839410771,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6
784128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23761d877d02584fd5c962635b65f8de23d9580007843dedec5c4a78a764f0b,PodSandboxId:e864e51c01956b365faa9a4562fa31e153302e03f784cbeb0ae4ae5eed7f7edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696381576723099543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569b80cb348d01ee953a3cc75094947468a947d46fce30fe6046a8178fc9b530,PodSandboxId:aaf3465b22e98b953f734582c65a4d2d06eecce3b68d5d5db2dee51a9db40930,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696381576644129658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.h
ash: 18868ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5199c6af89324d4e3ad138a0f270d4673e7aa4c4dc634ab3984089709310fa0,PodSandboxId:72c983e72d6fbcd910034c7cfccf505591c301a74b300bdaaf317c4d4bc55fff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696381576016595055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1,PodSandboxId:309529d963aa206a4a08646c5e78ab4a674f69c9d6cf31a74d174883c989c6f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696381576034135065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes
.container.hash: a2e1edd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2eee0895-98ab-4d33-8b01-1ec0b6c1638a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.218734828Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a521892-1441-4c93-8a6c-e15f89dca6d6 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.218817639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a521892-1441-4c93-8a6c-e15f89dca6d6 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.221001967Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=aad503d1-0fea-4ba5-bf86-4fd363a0b482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.221506740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381657221492912,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=aad503d1-0fea-4ba5-bf86-4fd363a0b482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.221982068Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1399d423-c3a2-4461-a3b9-d86d4bf36d9d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.222139823Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1399d423-c3a2-4461-a3b9-d86d4bf36d9d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.222379842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e666677e5cc191842d5706600cb5b5921f8191ee49c9a309e60e2e940d3c2fb8,PodSandboxId:203663b1f25f55e14b743477e23e25d482d2e6becdd000406c372cd88f3094ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696381653148709931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479c3b27834af8dad3cfe92de0023f81e2886bd1253bbb798f9f81c5aafac83,PodSandboxId:55ec51b8b4734ba53201051d25784f6167cc901f2ebb7bdfc709e013a0bee72c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696381603145710142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6caac564bcda8b77cac18a077b023a84e3b4e05cb45a735cebc11062169319e,PodSandboxId:6a43086a12e24f0c9c7335c20020f8d3692ac86908911338ace97f073e6b3648,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381602877163613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f087174cb4dfb18a0f695260556073ae39c8ef1b0d1723e5657ecde621313,PodSandboxId:738beed60a216e05199d7875ffa4ad4194ddfe5dd9f41c2685e1fe985eba1ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696381600330544666,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c58b12ff05edcd9cc6f5ab42e521f8d7cbd8fd4dfaedac9a02bfde3ff6e88b4,PodSandboxId:4ab325853d98b287738fa40f989203ca05c3ec1804426914d60deec179a211be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696381597839410771,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6
784128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23761d877d02584fd5c962635b65f8de23d9580007843dedec5c4a78a764f0b,PodSandboxId:e864e51c01956b365faa9a4562fa31e153302e03f784cbeb0ae4ae5eed7f7edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696381576723099543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569b80cb348d01ee953a3cc75094947468a947d46fce30fe6046a8178fc9b530,PodSandboxId:aaf3465b22e98b953f734582c65a4d2d06eecce3b68d5d5db2dee51a9db40930,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696381576644129658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.h
ash: 18868ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5199c6af89324d4e3ad138a0f270d4673e7aa4c4dc634ab3984089709310fa0,PodSandboxId:72c983e72d6fbcd910034c7cfccf505591c301a74b300bdaaf317c4d4bc55fff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696381576016595055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1,PodSandboxId:309529d963aa206a4a08646c5e78ab4a674f69c9d6cf31a74d174883c989c6f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696381576034135065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes
.container.hash: a2e1edd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1399d423-c3a2-4461-a3b9-d86d4bf36d9d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.265911364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9f3a3b4f-7660-48b7-8945-511b0cb8356c name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.266001945Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9f3a3b4f-7660-48b7-8945-511b0cb8356c name=/runtime.v1.RuntimeService/Version
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.267401252Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8051b65c-5268-43cd-945b-cbc0847539c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.267804090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696381657267789840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8051b65c-5268-43cd-945b-cbc0847539c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.268407001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ec5020bf-2f24-4554-aad2-8cea7255adcd name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.268485531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ec5020bf-2f24-4554-aad2-8cea7255adcd name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:07:37 multinode-038823 crio[717]: time="2023-10-04 01:07:37.268711675Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e666677e5cc191842d5706600cb5b5921f8191ee49c9a309e60e2e940d3c2fb8,PodSandboxId:203663b1f25f55e14b743477e23e25d482d2e6becdd000406c372cd88f3094ea,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696381653148709931,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9479c3b27834af8dad3cfe92de0023f81e2886bd1253bbb798f9f81c5aafac83,PodSandboxId:55ec51b8b4734ba53201051d25784f6167cc901f2ebb7bdfc709e013a0bee72c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696381603145710142,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6caac564bcda8b77cac18a077b023a84e3b4e05cb45a735cebc11062169319e,PodSandboxId:6a43086a12e24f0c9c7335c20020f8d3692ac86908911338ace97f073e6b3648,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696381602877163613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f087174cb4dfb18a0f695260556073ae39c8ef1b0d1723e5657ecde621313,PodSandboxId:738beed60a216e05199d7875ffa4ad4194ddfe5dd9f41c2685e1fe985eba1ba2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696381600330544666,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c58b12ff05edcd9cc6f5ab42e521f8d7cbd8fd4dfaedac9a02bfde3ff6e88b4,PodSandboxId:4ab325853d98b287738fa40f989203ca05c3ec1804426914d60deec179a211be,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696381597839410771,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6
784128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f23761d877d02584fd5c962635b65f8de23d9580007843dedec5c4a78a764f0b,PodSandboxId:e864e51c01956b365faa9a4562fa31e153302e03f784cbeb0ae4ae5eed7f7edc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696381576723099543,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:569b80cb348d01ee953a3cc75094947468a947d46fce30fe6046a8178fc9b530,PodSandboxId:aaf3465b22e98b953f734582c65a4d2d06eecce3b68d5d5db2dee51a9db40930,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696381576644129658,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.h
ash: 18868ac4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5199c6af89324d4e3ad138a0f270d4673e7aa4c4dc634ab3984089709310fa0,PodSandboxId:72c983e72d6fbcd910034c7cfccf505591c301a74b300bdaaf317c4d4bc55fff,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696381576016595055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1,PodSandboxId:309529d963aa206a4a08646c5e78ab4a674f69c9d6cf31a74d174883c989c6f9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696381576034135065,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes
.container.hash: a2e1edd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ec5020bf-2f24-4554-aad2-8cea7255adcd name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e666677e5cc19       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   203663b1f25f5       busybox-5bc68d56bd-ckxb4
	9479c3b27834a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      54 seconds ago       Running             coredns                   0                   55ec51b8b4734       coredns-5dd5756b68-xbln6
	b6caac564bcda       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       0                   6a43086a12e24       storage-provisioner
	122f087174cb4       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      57 seconds ago       Running             kindnet-cni               0                   738beed60a216       kindnet-prsst
	1c58b12ff05ed       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      59 seconds ago       Running             kube-proxy                0                   4ab325853d98b       kube-proxy-pz9j4
	f23761d877d02       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      About a minute ago   Running             kube-scheduler            0                   e864e51c01956       kube-scheduler-multinode-038823
	569b80cb348d0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   aaf3465b22e98       etcd-multinode-038823
	06000bf34eaae       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      About a minute ago   Running             kube-apiserver            0                   309529d963aa2       kube-apiserver-multinode-038823
	c5199c6af8932       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      About a minute ago   Running             kube-controller-manager   0                   72c983e72d6fb       kube-controller-manager-multinode-038823
	
	* 
	* ==> coredns [9479c3b27834af8dad3cfe92de0023f81e2886bd1253bbb798f9f81c5aafac83] <==
	* [INFO] 10.244.1.2:40084 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165989s
	[INFO] 10.244.0.3:53278 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137537s
	[INFO] 10.244.0.3:33551 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001966591s
	[INFO] 10.244.0.3:53489 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072878s
	[INFO] 10.244.0.3:38730 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000070553s
	[INFO] 10.244.0.3:39065 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001329954s
	[INFO] 10.244.0.3:59417 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000035293s
	[INFO] 10.244.0.3:41303 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058201s
	[INFO] 10.244.0.3:52160 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115642s
	[INFO] 10.244.1.2:39210 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000263801s
	[INFO] 10.244.1.2:40275 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113955s
	[INFO] 10.244.1.2:33729 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010844s
	[INFO] 10.244.1.2:49090 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075072s
	[INFO] 10.244.0.3:41503 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131222s
	[INFO] 10.244.0.3:47770 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098539s
	[INFO] 10.244.0.3:38125 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197994s
	[INFO] 10.244.0.3:37996 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094683s
	[INFO] 10.244.1.2:58362 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177643s
	[INFO] 10.244.1.2:46404 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00030466s
	[INFO] 10.244.1.2:48774 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000209155s
	[INFO] 10.244.1.2:55909 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000241146s
	[INFO] 10.244.0.3:40851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110488s
	[INFO] 10.244.0.3:59064 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000079074s
	[INFO] 10.244.0.3:41030 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000176515s
	[INFO] 10.244.0.3:53688 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00007324s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-038823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-038823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=multinode-038823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_06_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:06:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-038823
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:07:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:06:42 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:06:42 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:06:42 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:06:42 +0000   Wed, 04 Oct 2023 01:06:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-038823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ba921e5b974ee499fdce1a4921eb7b
	  System UUID:                09ba921e-5b97-4ee4-99fd-ce1a4921eb7b
	  Boot ID:                    066b8599-ecf8-4af6-807b-52119d8339ad
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ckxb4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-5dd5756b68-xbln6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-multinode-038823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	  kube-system                 kindnet-prsst                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      61s
	  kube-system                 kube-apiserver-multinode-038823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-multinode-038823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-pz9j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-multinode-038823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-038823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-038823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-038823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node multinode-038823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node multinode-038823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s                kubelet          Node multinode-038823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                node-controller  Node multinode-038823 event: Registered Node multinode-038823 in Controller
	  Normal  NodeReady                55s                kubelet          Node multinode-038823 status is now: NodeReady
	
	
	Name:               multinode-038823-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-038823-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:07:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-038823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:07:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:07:28 +0000   Wed, 04 Oct 2023 01:07:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:07:28 +0000   Wed, 04 Oct 2023 01:07:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:07:28 +0000   Wed, 04 Oct 2023 01:07:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:07:28 +0000   Wed, 04 Oct 2023 01:07:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    multinode-038823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d98153902ad4974b99e7d140dce28b7
	  System UUID:                6d981539-02ad-4974-b99e-7d140dce28b7
	  Boot ID:                    2bec2f70-3e73-4318-99e6-705a0876f3f6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-8g74z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-cqczw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19s
	  kube-system                 kube-proxy-hgg2z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19s (x5 over 21s)  kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x5 over 21s)  kubelet          Node multinode-038823-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x5 over 21s)  kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17s                node-controller  Node multinode-038823-m02 event: Registered Node multinode-038823-m02 in Controller
	  Normal  NodeReady                9s                 kubelet          Node multinode-038823-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:05] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072021] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.367457] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.518953] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.133135] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.020240] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct 4 01:06] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.108141] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.130113] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.107544] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.206434] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[ +10.209667] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +9.283035] systemd-fstab-generator[1263]: Ignoring "noauto" for root device
	[ +20.596959] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [569b80cb348d01ee953a3cc75094947468a947d46fce30fe6046a8178fc9b530] <==
	* {"level":"info","ts":"2023-10-04T01:06:18.335635Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-10-04T01:06:18.335737Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:06:18.335843Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:06:18.609853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T01:06:18.60997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T01:06:18.610004Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgPreVoteResp from eed9c28654b6490f at term 1"}
	{"level":"info","ts":"2023-10-04T01:06:18.610139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:06:18.610169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgVoteResp from eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-10-04T01:06:18.610275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became leader at term 2"}
	{"level":"info","ts":"2023-10-04T01:06:18.610304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eed9c28654b6490f elected leader eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-10-04T01:06:18.61243Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:06:18.612786Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eed9c28654b6490f","local-member-attributes":"{Name:multinode-038823 ClientURLs:[https://192.168.39.212:2379]}","request-path":"/0/members/eed9c28654b6490f/attributes","cluster-id":"f8d3b95e5bbb719c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:06:18.612984Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:06:18.613007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:06:18.614365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:06:18.613145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:06:18.614653Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:06:18.614792Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:06:18.614834Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:06:18.614933Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T01:06:18.618944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.212:2379"}
	{"level":"info","ts":"2023-10-04T01:06:37.066774Z","caller":"traceutil/trace.go:171","msg":"trace[1997042461] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"100.687979ms","start":"2023-10-04T01:06:36.966071Z","end":"2023-10-04T01:06:37.066759Z","steps":["trace[1997042461] 'process raft request'  (duration: 89.041187ms)","trace[1997042461] 'compare'  (duration: 11.20795ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T01:07:19.394377Z","caller":"traceutil/trace.go:171","msg":"trace[32122566] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"111.489151ms","start":"2023-10-04T01:07:19.282838Z","end":"2023-10-04T01:07:19.394327Z","steps":["trace[32122566] 'process raft request'  (duration: 111.253113ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:07:19.674952Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.400266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-038823-m02\" ","response":"range_response_count:1 size:2460"}
	{"level":"info","ts":"2023-10-04T01:07:19.675106Z","caller":"traceutil/trace.go:171","msg":"trace[990284857] range","detail":"{range_begin:/registry/minions/multinode-038823-m02; range_end:; response_count:1; response_revision:495; }","duration":"186.688425ms","start":"2023-10-04T01:07:19.488406Z","end":"2023-10-04T01:07:19.675095Z","steps":["trace[990284857] 'range keys from in-memory index tree'  (duration: 186.313134ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:07:37 up 1 min,  0 users,  load average: 0.73, 0.39, 0.15
	Linux multinode-038823 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [122f087174cb4dfb18a0f695260556073ae39c8ef1b0d1723e5657ecde621313] <==
	* I1004 01:06:41.050366       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1004 01:06:41.050624       1 main.go:107] hostIP = 192.168.39.212
	podIP = 192.168.39.212
	I1004 01:06:41.051186       1 main.go:116] setting mtu 1500 for CNI 
	I1004 01:06:41.051304       1 main.go:146] kindnetd IP family: "ipv4"
	I1004 01:06:41.051329       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1004 01:06:41.751998       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:06:41.752113       1 main.go:227] handling current node
	I1004 01:06:51.767411       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:06:51.767509       1 main.go:227] handling current node
	I1004 01:07:01.780877       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:07:01.780952       1 main.go:227] handling current node
	I1004 01:07:11.786338       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:07:11.786401       1 main.go:227] handling current node
	I1004 01:07:21.800107       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:07:21.800205       1 main.go:227] handling current node
	I1004 01:07:21.800236       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:07:21.800246       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	I1004 01:07:21.800648       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.181 Flags: [] Table: 0} 
	I1004 01:07:31.806313       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:07:31.806426       1 main.go:227] handling current node
	I1004 01:07:31.806459       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:07:31.806481       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1] <==
	* I1004 01:06:20.599200       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1004 01:06:20.614952       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 01:06:20.616226       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1004 01:06:20.616279       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1004 01:06:20.617284       1 shared_informer.go:318] Caches are synced for configmaps
	I1004 01:06:20.620464       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 01:06:20.621570       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 01:06:20.627607       1 controller.go:624] quota admission added evaluator for: namespaces
	E1004 01:06:20.630971       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1004 01:06:20.834991       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 01:06:21.425438       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1004 01:06:21.430420       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1004 01:06:21.430457       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 01:06:22.107003       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 01:06:22.160938       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 01:06:22.239649       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1004 01:06:22.247637       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.212]
	I1004 01:06:22.248854       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 01:06:22.253663       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1004 01:06:22.497079       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1004 01:06:23.923225       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1004 01:06:23.941188       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1004 01:06:23.961510       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1004 01:06:36.407495       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 01:06:36.489075       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [c5199c6af89324d4e3ad138a0f270d4673e7aa4c4dc634ab3984089709310fa0] <==
	* I1004 01:06:37.292360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.616µs"
	I1004 01:06:42.086190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="185.835µs"
	I1004 01:06:42.106774       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.928µs"
	I1004 01:06:44.320890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.947µs"
	I1004 01:06:44.374211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.717342ms"
	I1004 01:06:44.374451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.079µs"
	I1004 01:06:45.578432       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1004 01:07:18.224114       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-038823-m02\" does not exist"
	I1004 01:07:18.250260       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-038823-m02" podCIDRs=["10.244.1.0/24"]
	I1004 01:07:18.255852       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hgg2z"
	I1004 01:07:18.255980       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-cqczw"
	I1004 01:07:20.585985       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-038823-m02"
	I1004 01:07:20.586225       1 event.go:307] "Event occurred" object="multinode-038823-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-038823-m02 event: Registered Node multinode-038823-m02 in Controller"
	I1004 01:07:28.345698       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:07:30.631905       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1004 01:07:30.665579       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-8g74z"
	I1004 01:07:30.680683       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-ckxb4"
	I1004 01:07:30.693302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="61.761506ms"
	I1004 01:07:30.722841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.697009ms"
	I1004 01:07:30.723272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="278.407µs"
	I1004 01:07:30.746578       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.586µs"
	I1004 01:07:33.499766       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.365762ms"
	I1004 01:07:33.499913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="106.298µs"
	I1004 01:07:33.879420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.019853ms"
	I1004 01:07:33.879711       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="81.904µs"
	
	* 
	* ==> kube-proxy [1c58b12ff05edcd9cc6f5ab42e521f8d7cbd8fd4dfaedac9a02bfde3ff6e88b4] <==
	* I1004 01:06:38.015314       1 server_others.go:69] "Using iptables proxy"
	I1004 01:06:38.025245       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I1004 01:06:38.082421       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:06:38.082515       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:06:38.085215       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:06:38.085275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:06:38.085444       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:06:38.085480       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:06:38.086205       1 config.go:188] "Starting service config controller"
	I1004 01:06:38.086253       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:06:38.086274       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:06:38.086279       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:06:38.088660       1 config.go:315] "Starting node config controller"
	I1004 01:06:38.088695       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:06:38.186969       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:06:38.187098       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:06:38.189592       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [f23761d877d02584fd5c962635b65f8de23d9580007843dedec5c4a78a764f0b] <==
	* W1004 01:06:21.459298       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:06:21.459357       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:06:21.482294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:06:21.482461       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 01:06:21.495514       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:06:21.495610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 01:06:21.499962       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 01:06:21.500104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 01:06:21.529922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:06:21.530116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 01:06:21.610578       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 01:06:21.610682       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 01:06:21.668100       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:06:21.668153       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 01:06:21.749872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:06:21.750011       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 01:06:21.759008       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:06:21.759180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 01:06:21.766228       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:06:21.766278       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 01:06:21.781135       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:06:21.781271       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 01:06:21.909895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 01:06:21.910128       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I1004 01:06:24.448216       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:05:49 UTC, ends at Wed 2023-10-04 01:07:37 UTC. --
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.792557    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1775280f-c3e2-4162-9287-9b58a90c8f83-lib-modules\") pod \"kindnet-prsst\" (UID: \"1775280f-c3e2-4162-9287-9b58a90c8f83\") " pod="kube-system/kindnet-prsst"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.792926    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f00e2f-5611-43ae-94b5-d9dde6784128-lib-modules\") pod \"kube-proxy-pz9j4\" (UID: \"36f00e2f-5611-43ae-94b5-d9dde6784128\") " pod="kube-system/kube-proxy-pz9j4"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.801338    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1775280f-c3e2-4162-9287-9b58a90c8f83-cni-cfg\") pod \"kindnet-prsst\" (UID: \"1775280f-c3e2-4162-9287-9b58a90c8f83\") " pod="kube-system/kindnet-prsst"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.801935    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36f00e2f-5611-43ae-94b5-d9dde6784128-kube-proxy\") pod \"kube-proxy-pz9j4\" (UID: \"36f00e2f-5611-43ae-94b5-d9dde6784128\") " pod="kube-system/kube-proxy-pz9j4"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.802643    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f00e2f-5611-43ae-94b5-d9dde6784128-xtables-lock\") pod \"kube-proxy-pz9j4\" (UID: \"36f00e2f-5611-43ae-94b5-d9dde6784128\") " pod="kube-system/kube-proxy-pz9j4"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.802932    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1775280f-c3e2-4162-9287-9b58a90c8f83-xtables-lock\") pod \"kindnet-prsst\" (UID: \"1775280f-c3e2-4162-9287-9b58a90c8f83\") " pod="kube-system/kindnet-prsst"
	Oct 04 01:06:36 multinode-038823 kubelet[1273]: I1004 01:06:36.803477    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wllvz\" (UniqueName: \"kubernetes.io/projected/1775280f-c3e2-4162-9287-9b58a90c8f83-kube-api-access-wllvz\") pod \"kindnet-prsst\" (UID: \"1775280f-c3e2-4162-9287-9b58a90c8f83\") " pod="kube-system/kindnet-prsst"
	Oct 04 01:06:41 multinode-038823 kubelet[1273]: I1004 01:06:41.274888    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pz9j4" podStartSLOduration=5.274849616 podCreationTimestamp="2023-10-04 01:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 01:06:38.268835739 +0000 UTC m=+14.362511048" watchObservedRunningTime="2023-10-04 01:06:41.274849616 +0000 UTC m=+17.368524923"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.040615    1273 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.080855    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-prsst" podStartSLOduration=6.080815927 podCreationTimestamp="2023-10-04 01:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 01:06:41.276474474 +0000 UTC m=+17.370149783" watchObservedRunningTime="2023-10-04 01:06:42.080815927 +0000 UTC m=+18.174491236"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.081098    1273 topology_manager.go:215] "Topology Admit Handler" podUID="956d98ac-25cb-4d19-a9c7-c3a9682eff67" podNamespace="kube-system" podName="coredns-5dd5756b68-xbln6"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.085439    1273 topology_manager.go:215] "Topology Admit Handler" podUID="b4bd2f00-0b17-47da-add0-486f8232ea80" podNamespace="kube-system" podName="storage-provisioner"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.143367    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vtv\" (UniqueName: \"kubernetes.io/projected/956d98ac-25cb-4d19-a9c7-c3a9682eff67-kube-api-access-r8vtv\") pod \"coredns-5dd5756b68-xbln6\" (UID: \"956d98ac-25cb-4d19-a9c7-c3a9682eff67\") " pod="kube-system/coredns-5dd5756b68-xbln6"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.143442    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/956d98ac-25cb-4d19-a9c7-c3a9682eff67-config-volume\") pod \"coredns-5dd5756b68-xbln6\" (UID: \"956d98ac-25cb-4d19-a9c7-c3a9682eff67\") " pod="kube-system/coredns-5dd5756b68-xbln6"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.143471    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b4bd2f00-0b17-47da-add0-486f8232ea80-tmp\") pod \"storage-provisioner\" (UID: \"b4bd2f00-0b17-47da-add0-486f8232ea80\") " pod="kube-system/storage-provisioner"
	Oct 04 01:06:42 multinode-038823 kubelet[1273]: I1004 01:06:42.143490    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td4fk\" (UniqueName: \"kubernetes.io/projected/b4bd2f00-0b17-47da-add0-486f8232ea80-kube-api-access-td4fk\") pod \"storage-provisioner\" (UID: \"b4bd2f00-0b17-47da-add0-486f8232ea80\") " pod="kube-system/storage-provisioner"
	Oct 04 01:06:44 multinode-038823 kubelet[1273]: I1004 01:06:44.324710    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.324643718 podCreationTimestamp="2023-10-04 01:06:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 01:06:43.295444346 +0000 UTC m=+19.389119655" watchObservedRunningTime="2023-10-04 01:06:44.324643718 +0000 UTC m=+20.418319057"
	Oct 04 01:06:44 multinode-038823 kubelet[1273]: I1004 01:06:44.353557    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xbln6" podStartSLOduration=8.353521347 podCreationTimestamp="2023-10-04 01:06:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-04 01:06:44.325175632 +0000 UTC m=+20.418850941" watchObservedRunningTime="2023-10-04 01:06:44.353521347 +0000 UTC m=+20.447196656"
	Oct 04 01:07:24 multinode-038823 kubelet[1273]: E1004 01:07:24.157115    1273 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 01:07:24 multinode-038823 kubelet[1273]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 01:07:24 multinode-038823 kubelet[1273]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 01:07:24 multinode-038823 kubelet[1273]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 01:07:30 multinode-038823 kubelet[1273]: I1004 01:07:30.703673    1273 topology_manager.go:215] "Topology Admit Handler" podUID="0a2cc02b-be6a-4874-be28-422aa6bcbd21" podNamespace="default" podName="busybox-5bc68d56bd-ckxb4"
	Oct 04 01:07:30 multinode-038823 kubelet[1273]: I1004 01:07:30.728702    1273 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9488z\" (UniqueName: \"kubernetes.io/projected/0a2cc02b-be6a-4874-be28-422aa6bcbd21-kube-api-access-9488z\") pod \"busybox-5bc68d56bd-ckxb4\" (UID: \"0a2cc02b-be6a-4874-be28-422aa6bcbd21\") " pod="default/busybox-5bc68d56bd-ckxb4"
	Oct 04 01:07:33 multinode-038823 kubelet[1273]: I1004 01:07:33.489945    1273 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-ckxb4" podStartSLOduration=1.963984243 podCreationTimestamp="2023-10-04 01:07:30 +0000 UTC" firstStartedPulling="2023-10-04 01:07:31.599910548 +0000 UTC m=+67.693585837" lastFinishedPulling="2023-10-04 01:07:33.125777631 +0000 UTC m=+69.219452922" observedRunningTime="2023-10-04 01:07:33.48884482 +0000 UTC m=+69.582520109" watchObservedRunningTime="2023-10-04 01:07:33.489851328 +0000 UTC m=+69.583526637"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-038823 -n multinode-038823
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-038823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.24s)
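
As a rough manual cross-check of what PingHostFrom2Pods exercises, the commands below are a minimal sketch: they ping the KVM host gateway from each busybox pod shown in the post-mortem above. The 192.168.39.1 gateway address is inferred from the coredns PTR lookups (1.39.168.192.in-addr.arpa) and is an assumption, as is the exact probe the test itself runs.

	# hypothetical manual reproduction; pod names taken from the logs above
	kubectl --context multinode-038823 exec busybox-5bc68d56bd-ckxb4 -- ping -c 1 192.168.39.1
	kubectl --context multinode-038823 exec busybox-5bc68d56bd-8g74z -- ping -c 1 192.168.39.1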

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (688.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-038823
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-038823
E1004 01:10:33.291237  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-038823: exit status 82 (2m1.654949643s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-038823"  ...
	* Stopping node "multinode-038823"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-038823" : exit status 82
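
When minikube stop exits 82 with GUEST_STOP_TIMEOUT as above, a minimal triage sketch is to check the VM state and collect the logs the error box asks for. This assumes the kvm2 driver used by this job and that libvirt's virsh is available on the host; the libvirt domain name matching the profile name is also an assumption.

	# hypothetical manual triage of the VM that refused to stop
	out/minikube-linux-amd64 status -p multinode-038823
	out/minikube-linux-amd64 logs -p multinode-038823 --file=logs.txt
	virsh list --all | grep multinode-038823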
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-038823 --wait=true -v=8 --alsologtostderr
E1004 01:11:05.194827  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:12:28.242796  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:13:15.375331  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:15:33.292065  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:16:05.194332  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:16:56.336556  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:18:15.375657  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:19:38.425318  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-038823 --wait=true -v=8 --alsologtostderr: (9m24.006000933s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-038823
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-038823 -n multinode-038823
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-038823 logs -n 25: (1.772082948s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m02.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823:/home/docker/cp-test_multinode-038823-m02_multinode-038823.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823 sudo cat                                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m02_multinode-038823.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03:/home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m03 sudo cat                                   | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp testdata/cp-test.txt                                                | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m03.txt           |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823:/home/docker/cp-test_multinode-038823-m03_multinode-038823.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823 sudo cat                                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02:/home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m02 sudo cat                                   | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-038823 node stop m03                                                          | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	| node    | multinode-038823 node start                                                             | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:09 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| stop    | -p multinode-038823                                                                     | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| start   | -p multinode-038823                                                                     | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:11 UTC | 04 Oct 23 01:20 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823 | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:11:04
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:11:04.708590  151348 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:11:04.708718  151348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:11:04.708730  151348 out.go:309] Setting ErrFile to fd 2...
	I1004 01:11:04.708735  151348 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:11:04.708950  151348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:11:04.709589  151348 out.go:303] Setting JSON to false
	I1004 01:11:04.710618  151348 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":6816,"bootTime":1696375049,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:11:04.710678  151348 start.go:138] virtualization: kvm guest
	I1004 01:11:04.713136  151348 out.go:177] * [multinode-038823] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:11:04.714995  151348 notify.go:220] Checking for updates...
	I1004 01:11:04.715007  151348 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:11:04.716766  151348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:11:04.718176  151348 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:11:04.719484  151348 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:11:04.720816  151348 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:11:04.723178  151348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:11:04.725241  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:11:04.725357  151348 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:11:04.725918  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:11:04.725970  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:11:04.741411  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46629
	I1004 01:11:04.741895  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:11:04.742428  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:11:04.742453  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:11:04.742941  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:11:04.743173  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:11:04.781208  151348 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:11:04.782589  151348 start.go:298] selected driver: kvm2
	I1004 01:11:04.782608  151348 start.go:902] validating driver "kvm2" against &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fals
e ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:11:04.782754  151348 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:11:04.783058  151348 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:11:04.783133  151348 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:11:04.798903  151348 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:11:04.799597  151348 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:11:04.799644  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:11:04.799657  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:11:04.799670  151348 start_flags.go:321] config:
	{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pro
visioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:11:04.799906  151348 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:11:04.801957  151348 out.go:177] * Starting control plane node multinode-038823 in cluster multinode-038823
	I1004 01:11:04.803588  151348 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:11:04.803629  151348 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:11:04.803643  151348 cache.go:57] Caching tarball of preloaded images
	I1004 01:11:04.803746  151348 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:11:04.803758  151348 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:11:04.803869  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:11:04.804058  151348 start.go:365] acquiring machines lock for multinode-038823: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:11:04.804100  151348 start.go:369] acquired machines lock for "multinode-038823" in 23.411µs
	I1004 01:11:04.804113  151348 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:11:04.804121  151348 fix.go:54] fixHost starting: 
	I1004 01:11:04.804373  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:11:04.804423  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:11:04.818867  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42553
	I1004 01:11:04.819330  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:11:04.819840  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:11:04.819865  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:11:04.820220  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:11:04.820422  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:11:04.820576  151348 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:11:04.822240  151348 fix.go:102] recreateIfNeeded on multinode-038823: state=Running err=<nil>
	W1004 01:11:04.822270  151348 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:11:04.824191  151348 out.go:177] * Updating the running kvm2 "multinode-038823" VM ...
	I1004 01:11:04.825432  151348 machine.go:88] provisioning docker machine ...
	I1004 01:11:04.825450  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:11:04.825628  151348 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:11:04.825780  151348 buildroot.go:166] provisioning hostname "multinode-038823"
	I1004 01:11:04.825796  151348 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:11:04.825971  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:11:04.828320  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:11:04.828744  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:11:04.828775  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:11:04.828905  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:11:04.829087  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:11:04.829256  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:11:04.829400  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:11:04.829601  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:11:04.829991  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:11:04.830012  151348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823 && echo "multinode-038823" | sudo tee /etc/hostname
	I1004 01:11:23.146115  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:29.226199  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:32.298211  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:38.378167  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:41.450098  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:47.530123  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:50.602159  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:56.682150  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:11:59.754091  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:05.834265  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:08.906135  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:14.986205  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:18.058090  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:24.138182  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:27.210147  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:33.290189  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:36.362094  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:42.442153  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:45.514093  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:51.594151  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:12:54.666091  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:00.746101  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:03.818151  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:09.898213  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:12.970142  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:19.050145  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:22.122173  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:28.202144  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:31.274141  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:37.354168  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:40.426165  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:46.506170  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:49.578143  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:55.658153  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:13:58.730155  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:04.810181  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:07.882124  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:13.962195  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:17.034192  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:23.114162  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:26.186151  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:32.266119  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:35.338089  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:41.418135  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:44.490158  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:50.570154  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:53.642125  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:14:59.722125  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:02.794171  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:08.874193  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:11.946094  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:18.026171  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:21.098072  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:27.178164  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:30.250144  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:36.330155  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:39.402177  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:45.482136  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:48.554127  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:54.634188  151348 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.212:22: connect: no route to host
	I1004 01:15:57.636705  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:15:57.636766  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:15:57.638793  151348 machine.go:91] provisioned docker machine in 4m52.813341105s
	I1004 01:15:57.638836  151348 fix.go:56] fixHost completed within 4m52.834715656s
	I1004 01:15:57.638842  151348 start.go:83] releasing machines lock for "multinode-038823", held for 4m52.834733079s
	W1004 01:15:57.638860  151348 start.go:688] error starting host: provision: host is not running
	W1004 01:15:57.639093  151348 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1004 01:15:57.639105  151348 start.go:703] Will try again in 5 seconds ...
	I1004 01:16:02.641084  151348 start.go:365] acquiring machines lock for multinode-038823: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:16:02.641236  151348 start.go:369] acquired machines lock for "multinode-038823" in 93.572µs
	I1004 01:16:02.641275  151348 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:16:02.641289  151348 fix.go:54] fixHost starting: 
	I1004 01:16:02.641648  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:16:02.641671  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:16:02.657116  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38349
	I1004 01:16:02.657540  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:16:02.657988  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:16:02.658014  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:16:02.658325  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:16:02.658504  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:02.658641  151348 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:16:02.660288  151348 fix.go:102] recreateIfNeeded on multinode-038823: state=Stopped err=<nil>
	I1004 01:16:02.660313  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	W1004 01:16:02.660464  151348 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:16:02.662504  151348 out.go:177] * Restarting existing kvm2 VM for "multinode-038823" ...
	I1004 01:16:02.663853  151348 main.go:141] libmachine: (multinode-038823) Calling .Start
	I1004 01:16:02.664043  151348 main.go:141] libmachine: (multinode-038823) Ensuring networks are active...
	I1004 01:16:02.664880  151348 main.go:141] libmachine: (multinode-038823) Ensuring network default is active
	I1004 01:16:02.665225  151348 main.go:141] libmachine: (multinode-038823) Ensuring network mk-multinode-038823 is active
	I1004 01:16:02.665593  151348 main.go:141] libmachine: (multinode-038823) Getting domain xml...
	I1004 01:16:02.666214  151348 main.go:141] libmachine: (multinode-038823) Creating domain...
	I1004 01:16:03.899358  151348 main.go:141] libmachine: (multinode-038823) Waiting to get IP...
	I1004 01:16:03.900181  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:03.900647  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:03.900730  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:03.900642  152125 retry.go:31] will retry after 250.798787ms: waiting for machine to come up
	I1004 01:16:04.153304  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:04.153780  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:04.153805  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:04.153741  152125 retry.go:31] will retry after 323.889832ms: waiting for machine to come up
	I1004 01:16:04.479222  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:04.479707  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:04.479754  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:04.479647  152125 retry.go:31] will retry after 399.39052ms: waiting for machine to come up
	I1004 01:16:04.880085  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:04.880588  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:04.880620  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:04.880516  152125 retry.go:31] will retry after 543.85353ms: waiting for machine to come up
	I1004 01:16:05.426331  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:05.426777  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:05.426801  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:05.426731  152125 retry.go:31] will retry after 458.614006ms: waiting for machine to come up
	I1004 01:16:05.887314  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:05.887842  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:05.887871  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:05.887790  152125 retry.go:31] will retry after 892.768928ms: waiting for machine to come up
	I1004 01:16:06.781948  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:06.782318  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:06.782345  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:06.782287  152125 retry.go:31] will retry after 739.660075ms: waiting for machine to come up
	I1004 01:16:07.523231  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:07.523733  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:07.523765  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:07.523678  152125 retry.go:31] will retry after 1.253833837s: waiting for machine to come up
	I1004 01:16:08.779692  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:08.780170  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:08.780208  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:08.780139  152125 retry.go:31] will retry after 1.639608373s: waiting for machine to come up
	I1004 01:16:10.421902  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:10.422278  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:10.422313  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:10.422247  152125 retry.go:31] will retry after 1.607570778s: waiting for machine to come up
	I1004 01:16:12.032031  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:12.032485  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:12.032508  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:12.032439  152125 retry.go:31] will retry after 1.878388921s: waiting for machine to come up
	I1004 01:16:13.912893  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:13.913466  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:13.913498  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:13.913444  152125 retry.go:31] will retry after 3.421709531s: waiting for machine to come up
	I1004 01:16:17.339035  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:17.339405  151348 main.go:141] libmachine: (multinode-038823) DBG | unable to find current IP address of domain multinode-038823 in network mk-multinode-038823
	I1004 01:16:17.339435  151348 main.go:141] libmachine: (multinode-038823) DBG | I1004 01:16:17.339389  152125 retry.go:31] will retry after 3.349247965s: waiting for machine to come up
	I1004 01:16:20.692241  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.692801  151348 main.go:141] libmachine: (multinode-038823) Found IP for machine: 192.168.39.212
	I1004 01:16:20.692831  151348 main.go:141] libmachine: (multinode-038823) Reserving static IP address...
	I1004 01:16:20.692849  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has current primary IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.693372  151348 main.go:141] libmachine: (multinode-038823) Reserved static IP address: 192.168.39.212
	I1004 01:16:20.693416  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "multinode-038823", mac: "52:54:00:89:cd:1c", ip: "192.168.39.212"} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:20.693441  151348 main.go:141] libmachine: (multinode-038823) Waiting for SSH to be available...
	I1004 01:16:20.693471  151348 main.go:141] libmachine: (multinode-038823) DBG | skip adding static IP to network mk-multinode-038823 - found existing host DHCP lease matching {name: "multinode-038823", mac: "52:54:00:89:cd:1c", ip: "192.168.39.212"}
	I1004 01:16:20.693491  151348 main.go:141] libmachine: (multinode-038823) DBG | Getting to WaitForSSH function...
	I1004 01:16:20.695631  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.695943  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:20.695979  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.696132  151348 main.go:141] libmachine: (multinode-038823) DBG | Using SSH client type: external
	I1004 01:16:20.696163  151348 main.go:141] libmachine: (multinode-038823) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa (-rw-------)
	I1004 01:16:20.696199  151348 main.go:141] libmachine: (multinode-038823) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.212 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:16:20.696217  151348 main.go:141] libmachine: (multinode-038823) DBG | About to run SSH command:
	I1004 01:16:20.696225  151348 main.go:141] libmachine: (multinode-038823) DBG | exit 0
	I1004 01:16:20.795836  151348 main.go:141] libmachine: (multinode-038823) DBG | SSH cmd err, output: <nil>: 
	I1004 01:16:20.796210  151348 main.go:141] libmachine: (multinode-038823) Calling .GetConfigRaw
	I1004 01:16:20.796943  151348 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:16:20.799528  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.800016  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:20.800043  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.800382  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:16:20.800588  151348 machine.go:88] provisioning docker machine ...
	I1004 01:16:20.800608  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:20.800807  151348 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:16:20.800977  151348 buildroot.go:166] provisioning hostname "multinode-038823"
	I1004 01:16:20.800990  151348 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:16:20.801125  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:20.803209  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.803639  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:20.803670  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.803846  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:20.804012  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:20.804167  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:20.804302  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:20.804501  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:16:20.804879  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:16:20.804893  151348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823 && echo "multinode-038823" | sudo tee /etc/hostname
	I1004 01:16:20.946796  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-038823
	
	I1004 01:16:20.946829  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:20.949366  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.949687  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:20.949719  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:20.949871  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:20.950167  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:20.950376  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:20.950535  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:20.950757  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:16:20.951069  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:16:20.951097  151348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-038823' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-038823/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-038823' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:16:21.090350  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:16:21.090385  151348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:16:21.090407  151348 buildroot.go:174] setting up certificates
	I1004 01:16:21.090417  151348 provision.go:83] configureAuth start
	I1004 01:16:21.090430  151348 main.go:141] libmachine: (multinode-038823) Calling .GetMachineName
	I1004 01:16:21.090744  151348 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:16:21.093480  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.093833  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.093883  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.094059  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.096039  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.096390  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.096429  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.096527  151348 provision.go:138] copyHostCerts
	I1004 01:16:21.096562  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:16:21.096604  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:16:21.096617  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:16:21.096693  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:16:21.096797  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:16:21.096828  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:16:21.096840  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:16:21.096883  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:16:21.096948  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:16:21.096970  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:16:21.096979  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:16:21.097012  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:16:21.097071  151348 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.multinode-038823 san=[192.168.39.212 192.168.39.212 localhost 127.0.0.1 minikube multinode-038823]
	I1004 01:16:21.220216  151348 provision.go:172] copyRemoteCerts
	I1004 01:16:21.220285  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:16:21.220324  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.222999  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.223362  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.223386  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.223596  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.223794  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.223967  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.224110  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:16:21.319316  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 01:16:21.319390  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 01:16:21.342729  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 01:16:21.342817  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:16:21.365929  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 01:16:21.365999  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1004 01:16:21.389689  151348 provision.go:86] duration metric: configureAuth took 299.257212ms
	I1004 01:16:21.389715  151348 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:16:21.389966  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:16:21.390051  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.392876  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.393241  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.393277  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.393453  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.393713  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.393868  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.394008  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.394148  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:16:21.394506  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:16:21.394526  151348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:16:21.718614  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:16:21.718639  151348 machine.go:91] provisioned docker machine in 918.036804ms
	I1004 01:16:21.718649  151348 start.go:300] post-start starting for "multinode-038823" (driver="kvm2")
	I1004 01:16:21.718658  151348 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:16:21.718678  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:21.719001  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:16:21.719030  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.721730  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.722156  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.722183  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.722399  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.722620  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.722793  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.722969  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:16:21.816184  151348 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:16:21.820428  151348 command_runner.go:130] > NAME=Buildroot
	I1004 01:16:21.820447  151348 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1004 01:16:21.820452  151348 command_runner.go:130] > ID=buildroot
	I1004 01:16:21.820457  151348 command_runner.go:130] > VERSION_ID=2021.02.12
	I1004 01:16:21.820462  151348 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1004 01:16:21.820527  151348 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:16:21.820547  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:16:21.820635  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:16:21.820723  151348 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:16:21.820746  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 01:16:21.820847  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:16:21.830354  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:16:21.852691  151348 start.go:303] post-start completed in 134.025708ms
	I1004 01:16:21.852715  151348 fix.go:56] fixHost completed within 19.211424888s
	I1004 01:16:21.852735  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.855349  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.855875  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.855909  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.856097  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.856278  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.856482  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.856684  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.856842  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:16:21.857174  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.212 22 <nil> <nil>}
	I1004 01:16:21.857185  151348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:16:21.986572  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696382181.937474307
	
	I1004 01:16:21.986613  151348 fix.go:206] guest clock: 1696382181.937474307
	I1004 01:16:21.986629  151348 fix.go:219] Guest: 2023-10-04 01:16:21.937474307 +0000 UTC Remote: 2023-10-04 01:16:21.85271813 +0000 UTC m=+317.176813209 (delta=84.756177ms)
	I1004 01:16:21.986648  151348 fix.go:190] guest clock delta is within tolerance: 84.756177ms
	I1004 01:16:21.986655  151348 start.go:83] releasing machines lock for "multinode-038823", held for 19.345408425s
	I1004 01:16:21.986704  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:21.986968  151348 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:16:21.989468  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.989791  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.989827  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.990019  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:21.990663  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:21.990877  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:16:21.990960  151348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:16:21.991005  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.991152  151348 ssh_runner.go:195] Run: cat /version.json
	I1004 01:16:21.991200  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:16:21.993791  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.994139  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.994177  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.994202  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.994324  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.994500  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.994596  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:21.994627  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:21.994682  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.994778  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:16:21.994848  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:16:21.994956  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:16:21.995116  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:16:21.995254  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:16:22.083382  151348 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "0402681e4770013826956f326b174c70611f3073"}
	I1004 01:16:22.083584  151348 ssh_runner.go:195] Run: systemctl --version
	I1004 01:16:22.109646  151348 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 01:16:22.110469  151348 command_runner.go:130] > systemd 247 (247)
	I1004 01:16:22.110493  151348 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1004 01:16:22.110540  151348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:16:22.251139  151348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 01:16:22.257097  151348 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 01:16:22.257377  151348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:16:22.257442  151348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:16:22.271866  151348 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1004 01:16:22.272178  151348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:16:22.272198  151348 start.go:469] detecting cgroup driver to use...
	I1004 01:16:22.272258  151348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:16:22.287836  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:16:22.299008  151348 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:16:22.299062  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:16:22.311938  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:16:22.323949  151348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:16:22.337144  151348 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1004 01:16:22.428868  151348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:16:22.540206  151348 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1004 01:16:22.540247  151348 docker.go:213] disabling docker service ...
	I1004 01:16:22.540301  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:16:22.553663  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:16:22.565269  151348 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1004 01:16:22.565815  151348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:16:22.580346  151348 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1004 01:16:22.668102  151348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:16:22.681995  151348 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1004 01:16:22.682373  151348 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1004 01:16:22.770443  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:16:22.783371  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:16:22.800522  151348 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 01:16:22.800562  151348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:16:22.800610  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:16:22.811219  151348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:16:22.811289  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:16:22.822049  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:16:22.832382  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
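The three sed edits above point CRI-O at the 3.9 pause image, switch it to the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup, all through the /etc/crio/crio.conf.d/02-crio.conf drop-in. As a hedged sketch (the expected output below is reconstructed from those commands, not captured from this run), the result can be spot-checked on the guest with:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"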
	I1004 01:16:22.842015  151348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:16:22.851672  151348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:16:22.860227  151348 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:16:22.860264  151348 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:16:22.860311  151348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:16:22.873759  151348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
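The br_netfilter load and the echo into /proc/sys/net/ipv4/ip_forward above only last until the next reboot. A minimal sketch of how the same prerequisites are usually made persistent on a systemd guest (the file name and layout are assumptions, not taken from this log):

	sudo modprobe br_netfilter
	printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
	  | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
	sudo sysctl --system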
	I1004 01:16:22.882622  151348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:16:22.978168  151348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:16:23.133919  151348 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:16:23.133984  151348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:16:23.138577  151348 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 01:16:23.138603  151348 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 01:16:23.138629  151348 command_runner.go:130] > Device: 16h/22d	Inode: 795         Links: 1
	I1004 01:16:23.138640  151348 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:16:23.138648  151348 command_runner.go:130] > Access: 2023-10-04 01:16:23.068741754 +0000
	I1004 01:16:23.138658  151348 command_runner.go:130] > Modify: 2023-10-04 01:16:23.068741754 +0000
	I1004 01:16:23.138672  151348 command_runner.go:130] > Change: 2023-10-04 01:16:23.068741754 +0000
	I1004 01:16:23.138683  151348 command_runner.go:130] >  Birth: -
	I1004 01:16:23.138829  151348 start.go:537] Will wait 60s for crictl version
	I1004 01:16:23.138884  151348 ssh_runner.go:195] Run: which crictl
	I1004 01:16:23.142340  151348 command_runner.go:130] > /usr/bin/crictl
	I1004 01:16:23.142676  151348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:16:23.190064  151348 command_runner.go:130] > Version:  0.1.0
	I1004 01:16:23.190100  151348 command_runner.go:130] > RuntimeName:  cri-o
	I1004 01:16:23.190114  151348 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1004 01:16:23.190122  151348 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 01:16:23.190147  151348 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:16:23.190211  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:16:23.231949  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:16:23.231980  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:16:23.231988  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:16:23.231993  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:16:23.231999  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:16:23.232004  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:16:23.232009  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:16:23.232028  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:16:23.232042  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:16:23.232064  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:16:23.232076  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:16:23.232087  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:16:23.233289  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:16:23.282418  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:16:23.282444  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:16:23.282455  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:16:23.282462  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:16:23.282472  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:16:23.282479  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:16:23.282486  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:16:23.282494  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:16:23.282504  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:16:23.282524  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:16:23.282535  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:16:23.282545  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:16:23.287010  151348 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:16:23.288311  151348 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:16:23.290980  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:23.291336  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:16:23.291372  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:16:23.291530  151348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:16:23.295523  151348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:16:23.306958  151348 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:16:23.307013  151348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:16:23.342645  151348 command_runner.go:130] > {
	I1004 01:16:23.342669  151348 command_runner.go:130] >   "images": [
	I1004 01:16:23.342673  151348 command_runner.go:130] >     {
	I1004 01:16:23.342681  151348 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1004 01:16:23.342686  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:23.342691  151348 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1004 01:16:23.342695  151348 command_runner.go:130] >       ],
	I1004 01:16:23.342699  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:23.342712  151348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1004 01:16:23.342719  151348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1004 01:16:23.342724  151348 command_runner.go:130] >       ],
	I1004 01:16:23.342729  151348 command_runner.go:130] >       "size": "750414",
	I1004 01:16:23.342733  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:23.342738  151348 command_runner.go:130] >         "value": "65535"
	I1004 01:16:23.342742  151348 command_runner.go:130] >       },
	I1004 01:16:23.342746  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:23.342754  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:23.342761  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:23.342765  151348 command_runner.go:130] >     }
	I1004 01:16:23.342779  151348 command_runner.go:130] >   ]
	I1004 01:16:23.342782  151348 command_runner.go:130] > }
	I1004 01:16:23.342890  151348 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:16:23.342939  151348 ssh_runner.go:195] Run: which lz4
	I1004 01:16:23.346791  151348 command_runner.go:130] > /usr/bin/lz4
	I1004 01:16:23.346817  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1004 01:16:23.346885  151348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 01:16:23.350806  151348 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:16:23.350832  151348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:16:23.350857  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:16:25.167574  151348 crio.go:444] Took 1.820704 seconds to copy over tarball
	I1004 01:16:25.167651  151348 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:16:27.906990  151348 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.739303912s)
	I1004 01:16:27.907025  151348 crio.go:451] Took 2.739424 seconds to extract the tarball
	I1004 01:16:27.907037  151348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:16:27.948235  151348 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:16:27.997871  151348 command_runner.go:130] > {
	I1004 01:16:27.997892  151348 command_runner.go:130] >   "images": [
	I1004 01:16:27.997896  151348 command_runner.go:130] >     {
	I1004 01:16:27.997904  151348 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1004 01:16:27.997909  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.997920  151348 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1004 01:16:27.997923  151348 command_runner.go:130] >       ],
	I1004 01:16:27.997929  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.997938  151348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1004 01:16:27.997945  151348 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1004 01:16:27.997949  151348 command_runner.go:130] >       ],
	I1004 01:16:27.997954  151348 command_runner.go:130] >       "size": "65258016",
	I1004 01:16:27.997960  151348 command_runner.go:130] >       "uid": null,
	I1004 01:16:27.997964  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.997976  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.997983  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.997987  151348 command_runner.go:130] >     },
	I1004 01:16:27.997991  151348 command_runner.go:130] >     {
	I1004 01:16:27.997999  151348 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1004 01:16:27.998005  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998011  151348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1004 01:16:27.998017  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998021  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998037  151348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1004 01:16:27.998047  151348 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1004 01:16:27.998053  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998060  151348 command_runner.go:130] >       "size": "31470524",
	I1004 01:16:27.998066  151348 command_runner.go:130] >       "uid": null,
	I1004 01:16:27.998070  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998074  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998080  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998083  151348 command_runner.go:130] >     },
	I1004 01:16:27.998090  151348 command_runner.go:130] >     {
	I1004 01:16:27.998096  151348 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1004 01:16:27.998101  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998106  151348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1004 01:16:27.998113  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998117  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998126  151348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1004 01:16:27.998134  151348 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1004 01:16:27.998140  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998146  151348 command_runner.go:130] >       "size": "53621675",
	I1004 01:16:27.998152  151348 command_runner.go:130] >       "uid": null,
	I1004 01:16:27.998156  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998163  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998167  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998170  151348 command_runner.go:130] >     },
	I1004 01:16:27.998174  151348 command_runner.go:130] >     {
	I1004 01:16:27.998180  151348 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1004 01:16:27.998186  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998195  151348 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1004 01:16:27.998201  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998205  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998215  151348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1004 01:16:27.998224  151348 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1004 01:16:27.998235  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998242  151348 command_runner.go:130] >       "size": "295456551",
	I1004 01:16:27.998245  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:27.998250  151348 command_runner.go:130] >         "value": "0"
	I1004 01:16:27.998256  151348 command_runner.go:130] >       },
	I1004 01:16:27.998263  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998267  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998271  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998275  151348 command_runner.go:130] >     },
	I1004 01:16:27.998281  151348 command_runner.go:130] >     {
	I1004 01:16:27.998287  151348 command_runner.go:130] >       "id": "cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce",
	I1004 01:16:27.998293  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998299  151348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1004 01:16:27.998305  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998309  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998319  151348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631",
	I1004 01:16:27.998326  151348 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1004 01:16:27.998333  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998337  151348 command_runner.go:130] >       "size": "127149008",
	I1004 01:16:27.998343  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:27.998347  151348 command_runner.go:130] >         "value": "0"
	I1004 01:16:27.998353  151348 command_runner.go:130] >       },
	I1004 01:16:27.998359  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998366  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998370  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998376  151348 command_runner.go:130] >     },
	I1004 01:16:27.998380  151348 command_runner.go:130] >     {
	I1004 01:16:27.998389  151348 command_runner.go:130] >       "id": "55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57",
	I1004 01:16:27.998395  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998400  151348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1004 01:16:27.998406  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998410  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998420  151348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4",
	I1004 01:16:27.998430  151348 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"
	I1004 01:16:27.998435  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998440  151348 command_runner.go:130] >       "size": "123171638",
	I1004 01:16:27.998446  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:27.998450  151348 command_runner.go:130] >         "value": "0"
	I1004 01:16:27.998456  151348 command_runner.go:130] >       },
	I1004 01:16:27.998460  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998469  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998475  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998479  151348 command_runner.go:130] >     },
	I1004 01:16:27.998486  151348 command_runner.go:130] >     {
	I1004 01:16:27.998492  151348 command_runner.go:130] >       "id": "c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0",
	I1004 01:16:27.998498  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998505  151348 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1004 01:16:27.998511  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998515  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998524  151348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded",
	I1004 01:16:27.998534  151348 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"
	I1004 01:16:27.998539  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998544  151348 command_runner.go:130] >       "size": "74687895",
	I1004 01:16:27.998550  151348 command_runner.go:130] >       "uid": null,
	I1004 01:16:27.998554  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998561  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998565  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998571  151348 command_runner.go:130] >     },
	I1004 01:16:27.998576  151348 command_runner.go:130] >     {
	I1004 01:16:27.998584  151348 command_runner.go:130] >       "id": "7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8",
	I1004 01:16:27.998591  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998596  151348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1004 01:16:27.998602  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998606  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998629  151348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1004 01:16:27.998650  151348 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"
	I1004 01:16:27.998654  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998658  151348 command_runner.go:130] >       "size": "61485878",
	I1004 01:16:27.998662  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:27.998667  151348 command_runner.go:130] >         "value": "0"
	I1004 01:16:27.998672  151348 command_runner.go:130] >       },
	I1004 01:16:27.998677  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998684  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998688  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998692  151348 command_runner.go:130] >     },
	I1004 01:16:27.998696  151348 command_runner.go:130] >     {
	I1004 01:16:27.998706  151348 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1004 01:16:27.998713  151348 command_runner.go:130] >       "repoTags": [
	I1004 01:16:27.998718  151348 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1004 01:16:27.998724  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998728  151348 command_runner.go:130] >       "repoDigests": [
	I1004 01:16:27.998737  151348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1004 01:16:27.998747  151348 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1004 01:16:27.998752  151348 command_runner.go:130] >       ],
	I1004 01:16:27.998757  151348 command_runner.go:130] >       "size": "750414",
	I1004 01:16:27.998763  151348 command_runner.go:130] >       "uid": {
	I1004 01:16:27.998767  151348 command_runner.go:130] >         "value": "65535"
	I1004 01:16:27.998773  151348 command_runner.go:130] >       },
	I1004 01:16:27.998777  151348 command_runner.go:130] >       "username": "",
	I1004 01:16:27.998781  151348 command_runner.go:130] >       "spec": null,
	I1004 01:16:27.998786  151348 command_runner.go:130] >       "pinned": false
	I1004 01:16:27.998789  151348 command_runner.go:130] >     }
	I1004 01:16:27.998795  151348 command_runner.go:130] >   ]
	I1004 01:16:27.998799  151348 command_runner.go:130] > }
	I1004 01:16:27.998904  151348 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:16:27.998916  151348 cache_images.go:84] Images are preloaded, skipping loading
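To repeat by hand the preload check the log just performed, the same crictl JSON can be filtered for the expected control-plane tag; a small sketch assuming jq is available (jq is not used by this run):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' \
	  | grep 'registry.k8s.io/kube-apiserver:v1.28.2'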
	I1004 01:16:27.999000  151348 ssh_runner.go:195] Run: crio config
	I1004 01:16:28.054335  151348 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 01:16:28.054371  151348 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 01:16:28.054381  151348 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 01:16:28.054384  151348 command_runner.go:130] > #
	I1004 01:16:28.054391  151348 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 01:16:28.054397  151348 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 01:16:28.054404  151348 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 01:16:28.054410  151348 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 01:16:28.054418  151348 command_runner.go:130] > # reload'.
	I1004 01:16:28.054426  151348 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 01:16:28.054438  151348 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 01:16:28.054444  151348 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 01:16:28.054450  151348 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 01:16:28.054456  151348 command_runner.go:130] > [crio]
	I1004 01:16:28.054467  151348 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 01:16:28.054475  151348 command_runner.go:130] > # containers images, in this directory.
	I1004 01:16:28.054480  151348 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 01:16:28.054493  151348 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 01:16:28.054498  151348 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 01:16:28.054504  151348 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 01:16:28.054513  151348 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 01:16:28.054518  151348 command_runner.go:130] > storage_driver = "overlay"
	I1004 01:16:28.054524  151348 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 01:16:28.054530  151348 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 01:16:28.054535  151348 command_runner.go:130] > storage_option = [
	I1004 01:16:28.054540  151348 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 01:16:28.054544  151348 command_runner.go:130] > ]
	I1004 01:16:28.054551  151348 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 01:16:28.054559  151348 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 01:16:28.054563  151348 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 01:16:28.054571  151348 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 01:16:28.054577  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 01:16:28.054586  151348 command_runner.go:130] > # always happen on a node reboot
	I1004 01:16:28.054590  151348 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 01:16:28.054596  151348 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 01:16:28.054602  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 01:16:28.054614  151348 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 01:16:28.054621  151348 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1004 01:16:28.054629  151348 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 01:16:28.054647  151348 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 01:16:28.054653  151348 command_runner.go:130] > # internal_wipe = true
	I1004 01:16:28.054659  151348 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 01:16:28.054668  151348 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 01:16:28.054674  151348 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 01:16:28.054682  151348 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 01:16:28.054687  151348 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 01:16:28.054697  151348 command_runner.go:130] > [crio.api]
	I1004 01:16:28.054711  151348 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 01:16:28.054721  151348 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 01:16:28.054729  151348 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 01:16:28.054745  151348 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 01:16:28.054757  151348 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 01:16:28.054765  151348 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 01:16:28.054769  151348 command_runner.go:130] > # stream_port = "0"
	I1004 01:16:28.054776  151348 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 01:16:28.054783  151348 command_runner.go:130] > # stream_enable_tls = false
	I1004 01:16:28.054789  151348 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 01:16:28.054795  151348 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 01:16:28.054802  151348 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 01:16:28.054813  151348 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 01:16:28.054822  151348 command_runner.go:130] > # minutes.
	I1004 01:16:28.054829  151348 command_runner.go:130] > # stream_tls_cert = ""
	I1004 01:16:28.054843  151348 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 01:16:28.054854  151348 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 01:16:28.054865  151348 command_runner.go:130] > # stream_tls_key = ""
	I1004 01:16:28.054873  151348 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 01:16:28.054882  151348 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 01:16:28.054888  151348 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 01:16:28.054898  151348 command_runner.go:130] > # stream_tls_ca = ""
	I1004 01:16:28.054906  151348 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:16:28.054945  151348 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 01:16:28.054962  151348 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:16:28.054970  151348 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 01:16:28.055014  151348 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 01:16:28.055032  151348 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 01:16:28.055039  151348 command_runner.go:130] > [crio.runtime]
	I1004 01:16:28.055049  151348 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 01:16:28.055061  151348 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 01:16:28.055072  151348 command_runner.go:130] > # "nofile=1024:2048"
	I1004 01:16:28.055082  151348 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 01:16:28.055091  151348 command_runner.go:130] > # default_ulimits = [
	I1004 01:16:28.055097  151348 command_runner.go:130] > # ]
	I1004 01:16:28.055111  151348 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 01:16:28.055121  151348 command_runner.go:130] > # no_pivot = false
	I1004 01:16:28.055131  151348 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 01:16:28.055144  151348 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 01:16:28.055161  151348 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 01:16:28.055175  151348 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 01:16:28.055188  151348 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 01:16:28.055202  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:16:28.055213  151348 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 01:16:28.055224  151348 command_runner.go:130] > # Cgroup setting for conmon
	I1004 01:16:28.055236  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 01:16:28.055247  151348 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 01:16:28.055260  151348 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 01:16:28.055273  151348 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 01:16:28.055289  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:16:28.055298  151348 command_runner.go:130] > conmon_env = [
	I1004 01:16:28.055310  151348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 01:16:28.055318  151348 command_runner.go:130] > ]
	I1004 01:16:28.055327  151348 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 01:16:28.055338  151348 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 01:16:28.055351  151348 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 01:16:28.055362  151348 command_runner.go:130] > # default_env = [
	I1004 01:16:28.055371  151348 command_runner.go:130] > # ]
	I1004 01:16:28.055384  151348 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 01:16:28.055394  151348 command_runner.go:130] > # selinux = false
	I1004 01:16:28.055404  151348 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 01:16:28.055421  151348 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 01:16:28.055430  151348 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 01:16:28.055436  151348 command_runner.go:130] > # seccomp_profile = ""
	I1004 01:16:28.055442  151348 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 01:16:28.055450  151348 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 01:16:28.055459  151348 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 01:16:28.055463  151348 command_runner.go:130] > # which might increase security.
	I1004 01:16:28.055468  151348 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 01:16:28.055475  151348 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 01:16:28.055484  151348 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 01:16:28.055498  151348 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 01:16:28.055512  151348 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 01:16:28.055525  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:16:28.055536  151348 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 01:16:28.055553  151348 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 01:16:28.055566  151348 command_runner.go:130] > # the cgroup blockio controller.
	I1004 01:16:28.055574  151348 command_runner.go:130] > # blockio_config_file = ""
	I1004 01:16:28.055588  151348 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 01:16:28.055598  151348 command_runner.go:130] > # irqbalance daemon.
	I1004 01:16:28.055607  151348 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 01:16:28.055622  151348 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 01:16:28.055640  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:16:28.055650  151348 command_runner.go:130] > # rdt_config_file = ""
	I1004 01:16:28.055661  151348 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 01:16:28.055672  151348 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 01:16:28.055684  151348 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 01:16:28.055695  151348 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 01:16:28.055706  151348 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 01:16:28.055720  151348 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 01:16:28.055734  151348 command_runner.go:130] > # will be added.
	I1004 01:16:28.055744  151348 command_runner.go:130] > # default_capabilities = [
	I1004 01:16:28.055754  151348 command_runner.go:130] > # 	"CHOWN",
	I1004 01:16:28.055764  151348 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 01:16:28.055775  151348 command_runner.go:130] > # 	"FSETID",
	I1004 01:16:28.055782  151348 command_runner.go:130] > # 	"FOWNER",
	I1004 01:16:28.055792  151348 command_runner.go:130] > # 	"SETGID",
	I1004 01:16:28.055798  151348 command_runner.go:130] > # 	"SETUID",
	I1004 01:16:28.055808  151348 command_runner.go:130] > # 	"SETPCAP",
	I1004 01:16:28.055815  151348 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 01:16:28.055825  151348 command_runner.go:130] > # 	"KILL",
	I1004 01:16:28.055831  151348 command_runner.go:130] > # ]
	I1004 01:16:28.055845  151348 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 01:16:28.055858  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:16:28.055867  151348 command_runner.go:130] > # default_sysctls = [
	I1004 01:16:28.055877  151348 command_runner.go:130] > # ]
	I1004 01:16:28.055884  151348 command_runner.go:130] > # List of devices on the host that a
	I1004 01:16:28.055894  151348 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 01:16:28.055898  151348 command_runner.go:130] > # allowed_devices = [
	I1004 01:16:28.055904  151348 command_runner.go:130] > # 	"/dev/fuse",
	I1004 01:16:28.055908  151348 command_runner.go:130] > # ]
	I1004 01:16:28.055918  151348 command_runner.go:130] > # List of additional devices, specified as
	I1004 01:16:28.055956  151348 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 01:16:28.055963  151348 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 01:16:28.055997  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:16:28.056009  151348 command_runner.go:130] > # additional_devices = [
	I1004 01:16:28.056016  151348 command_runner.go:130] > # ]
	I1004 01:16:28.056026  151348 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 01:16:28.056036  151348 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 01:16:28.056043  151348 command_runner.go:130] > # 	"/etc/cdi",
	I1004 01:16:28.056053  151348 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 01:16:28.056059  151348 command_runner.go:130] > # ]
	I1004 01:16:28.056072  151348 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 01:16:28.056085  151348 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 01:16:28.056095  151348 command_runner.go:130] > # Defaults to false.
	I1004 01:16:28.056108  151348 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 01:16:28.056122  151348 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 01:16:28.056136  151348 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 01:16:28.056147  151348 command_runner.go:130] > # hooks_dir = [
	I1004 01:16:28.056161  151348 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 01:16:28.056170  151348 command_runner.go:130] > # ]
	I1004 01:16:28.056180  151348 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 01:16:28.056193  151348 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 01:16:28.056203  151348 command_runner.go:130] > # its default mounts from the following two files:
	I1004 01:16:28.056212  151348 command_runner.go:130] > #
	I1004 01:16:28.056223  151348 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 01:16:28.056237  151348 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 01:16:28.056250  151348 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 01:16:28.056259  151348 command_runner.go:130] > #
	I1004 01:16:28.056269  151348 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 01:16:28.056283  151348 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 01:16:28.056298  151348 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 01:16:28.056309  151348 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 01:16:28.056315  151348 command_runner.go:130] > #
	I1004 01:16:28.056323  151348 command_runner.go:130] > # default_mounts_file = ""
	I1004 01:16:28.056335  151348 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 01:16:28.056346  151348 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 01:16:28.056356  151348 command_runner.go:130] > pids_limit = 1024
	I1004 01:16:28.056370  151348 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1004 01:16:28.056384  151348 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 01:16:28.056398  151348 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 01:16:28.056414  151348 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 01:16:28.056424  151348 command_runner.go:130] > # log_size_max = -1
	I1004 01:16:28.056439  151348 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1004 01:16:28.056448  151348 command_runner.go:130] > # log_to_journald = false
	I1004 01:16:28.056458  151348 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 01:16:28.056469  151348 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 01:16:28.056480  151348 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 01:16:28.056489  151348 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 01:16:28.056502  151348 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 01:16:28.056512  151348 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 01:16:28.056524  151348 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 01:16:28.056534  151348 command_runner.go:130] > # read_only = false
	I1004 01:16:28.056547  151348 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 01:16:28.056561  151348 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 01:16:28.056576  151348 command_runner.go:130] > # live configuration reload.
	I1004 01:16:28.056583  151348 command_runner.go:130] > # log_level = "info"
	I1004 01:16:28.056596  151348 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 01:16:28.056608  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:16:28.056618  151348 command_runner.go:130] > # log_filter = ""
	I1004 01:16:28.056628  151348 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 01:16:28.056647  151348 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 01:16:28.056657  151348 command_runner.go:130] > # separated by comma.
	I1004 01:16:28.056667  151348 command_runner.go:130] > # uid_mappings = ""
	I1004 01:16:28.056680  151348 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 01:16:28.056690  151348 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 01:16:28.056697  151348 command_runner.go:130] > # separated by comma.
	I1004 01:16:28.056703  151348 command_runner.go:130] > # gid_mappings = ""
	I1004 01:16:28.056717  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 01:16:28.056728  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:16:28.056742  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:16:28.056752  151348 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 01:16:28.056762  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 01:16:28.056779  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:16:28.056794  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:16:28.056805  151348 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 01:16:28.056819  151348 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 01:16:28.056832  151348 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 01:16:28.056845  151348 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 01:16:28.056858  151348 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 01:16:28.056868  151348 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 01:16:28.056902  151348 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 01:16:28.056914  151348 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 01:16:28.056924  151348 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 01:16:28.056934  151348 command_runner.go:130] > drop_infra_ctr = false
	I1004 01:16:28.056945  151348 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 01:16:28.056957  151348 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 01:16:28.056973  151348 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 01:16:28.056983  151348 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 01:16:28.056996  151348 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 01:16:28.057008  151348 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 01:16:28.057020  151348 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 01:16:28.057035  151348 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 01:16:28.057046  151348 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 01:16:28.057059  151348 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 01:16:28.057073  151348 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1004 01:16:28.057083  151348 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1004 01:16:28.057090  151348 command_runner.go:130] > # default_runtime = "runc"
	I1004 01:16:28.057105  151348 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 01:16:28.057123  151348 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1004 01:16:28.057142  151348 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 01:16:28.057154  151348 command_runner.go:130] > # creation as a file is not desired either.
	I1004 01:16:28.057166  151348 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 01:16:28.057175  151348 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 01:16:28.057183  151348 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 01:16:28.057192  151348 command_runner.go:130] > # ]
	I1004 01:16:28.057203  151348 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 01:16:28.057222  151348 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 01:16:28.057235  151348 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1004 01:16:28.057253  151348 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1004 01:16:28.057260  151348 command_runner.go:130] > #
	I1004 01:16:28.057266  151348 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1004 01:16:28.057274  151348 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1004 01:16:28.057281  151348 command_runner.go:130] > #  runtime_type = "oci"
	I1004 01:16:28.057293  151348 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1004 01:16:28.057302  151348 command_runner.go:130] > #  privileged_without_host_devices = false
	I1004 01:16:28.057314  151348 command_runner.go:130] > #  allowed_annotations = []
	I1004 01:16:28.057323  151348 command_runner.go:130] > # Where:
	I1004 01:16:28.057332  151348 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1004 01:16:28.057345  151348 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1004 01:16:28.057358  151348 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 01:16:28.057370  151348 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 01:16:28.057380  151348 command_runner.go:130] > #   in $PATH.
	I1004 01:16:28.057391  151348 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1004 01:16:28.057403  151348 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 01:16:28.057416  151348 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1004 01:16:28.057425  151348 command_runner.go:130] > #   state.
	I1004 01:16:28.057439  151348 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 01:16:28.057450  151348 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1004 01:16:28.057460  151348 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 01:16:28.057473  151348 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 01:16:28.057486  151348 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 01:16:28.057501  151348 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 01:16:28.057512  151348 command_runner.go:130] > #   The currently recognized values are:
	I1004 01:16:28.057528  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 01:16:28.057539  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 01:16:28.057551  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 01:16:28.057564  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 01:16:28.057580  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 01:16:28.057595  151348 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 01:16:28.057607  151348 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 01:16:28.057621  151348 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1004 01:16:28.057628  151348 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 01:16:28.057640  151348 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 01:16:28.057651  151348 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 01:16:28.057663  151348 command_runner.go:130] > runtime_type = "oci"
	I1004 01:16:28.057674  151348 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 01:16:28.057684  151348 command_runner.go:130] > runtime_config_path = ""
	I1004 01:16:28.057691  151348 command_runner.go:130] > monitor_path = ""
	I1004 01:16:28.057701  151348 command_runner.go:130] > monitor_cgroup = ""
	I1004 01:16:28.057711  151348 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 01:16:28.057740  151348 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1004 01:16:28.057752  151348 command_runner.go:130] > # running containers
	I1004 01:16:28.057760  151348 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1004 01:16:28.057774  151348 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1004 01:16:28.057831  151348 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1004 01:16:28.057855  151348 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1004 01:16:28.057865  151348 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1004 01:16:28.057877  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1004 01:16:28.057887  151348 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1004 01:16:28.057898  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1004 01:16:28.057909  151348 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1004 01:16:28.057919  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1004 01:16:28.057932  151348 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 01:16:28.057944  151348 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 01:16:28.057959  151348 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 01:16:28.057976  151348 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 01:16:28.057992  151348 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 01:16:28.058004  151348 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 01:16:28.058021  151348 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 01:16:28.058035  151348 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 01:16:28.058048  151348 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 01:16:28.058063  151348 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 01:16:28.058074  151348 command_runner.go:130] > # Example:
	I1004 01:16:28.058085  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 01:16:28.058097  151348 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 01:16:28.058108  151348 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 01:16:28.058127  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 01:16:28.058134  151348 command_runner.go:130] > # cpuset = 0
	I1004 01:16:28.058140  151348 command_runner.go:130] > # cpushares = "0-1"
	I1004 01:16:28.058149  151348 command_runner.go:130] > # Where:
	I1004 01:16:28.058161  151348 command_runner.go:130] > # The workload name is workload-type.
	I1004 01:16:28.058177  151348 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 01:16:28.058189  151348 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 01:16:28.058202  151348 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 01:16:28.058215  151348 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 01:16:28.058225  151348 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 01:16:28.058230  151348 command_runner.go:130] > # 
	I1004 01:16:28.058244  151348 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 01:16:28.058254  151348 command_runner.go:130] > #
	I1004 01:16:28.058264  151348 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 01:16:28.058277  151348 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 01:16:28.058291  151348 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 01:16:28.058302  151348 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 01:16:28.058311  151348 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 01:16:28.058315  151348 command_runner.go:130] > [crio.image]
	I1004 01:16:28.058329  151348 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 01:16:28.058342  151348 command_runner.go:130] > # default_transport = "docker://"
	I1004 01:16:28.058353  151348 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 01:16:28.058370  151348 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:16:28.058380  151348 command_runner.go:130] > # global_auth_file = ""
	I1004 01:16:28.058389  151348 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 01:16:28.058400  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:16:28.058410  151348 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1004 01:16:28.058416  151348 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 01:16:28.058429  151348 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:16:28.058445  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:16:28.058460  151348 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 01:16:28.058473  151348 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 01:16:28.058486  151348 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1004 01:16:28.058498  151348 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1004 01:16:28.058509  151348 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 01:16:28.058517  151348 command_runner.go:130] > # pause_command = "/pause"
	I1004 01:16:28.058527  151348 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 01:16:28.058541  151348 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 01:16:28.058555  151348 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 01:16:28.058569  151348 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 01:16:28.058587  151348 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 01:16:28.058594  151348 command_runner.go:130] > # signature_policy = ""
	I1004 01:16:28.058624  151348 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 01:16:28.058640  151348 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 01:16:28.058649  151348 command_runner.go:130] > # changing them here.
	I1004 01:16:28.058656  151348 command_runner.go:130] > # insecure_registries = [
	I1004 01:16:28.058662  151348 command_runner.go:130] > # ]
	I1004 01:16:28.058672  151348 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 01:16:28.058681  151348 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 01:16:28.058688  151348 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 01:16:28.058694  151348 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 01:16:28.058699  151348 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 01:16:28.058707  151348 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1004 01:16:28.058713  151348 command_runner.go:130] > # CNI plugins.
	I1004 01:16:28.058720  151348 command_runner.go:130] > [crio.network]
	I1004 01:16:28.058730  151348 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 01:16:28.058739  151348 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 01:16:28.058751  151348 command_runner.go:130] > # cni_default_network = ""
	I1004 01:16:28.058763  151348 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 01:16:28.058774  151348 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 01:16:28.058784  151348 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 01:16:28.058789  151348 command_runner.go:130] > # plugin_dirs = [
	I1004 01:16:28.058798  151348 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 01:16:28.058806  151348 command_runner.go:130] > # ]
	I1004 01:16:28.058819  151348 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 01:16:28.058831  151348 command_runner.go:130] > [crio.metrics]
	I1004 01:16:28.058843  151348 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 01:16:28.058853  151348 command_runner.go:130] > enable_metrics = true
	I1004 01:16:28.058864  151348 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 01:16:28.058873  151348 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 01:16:28.058886  151348 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1004 01:16:28.058900  151348 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 01:16:28.058914  151348 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 01:16:28.058924  151348 command_runner.go:130] > # metrics_collectors = [
	I1004 01:16:28.058935  151348 command_runner.go:130] > # 	"operations",
	I1004 01:16:28.058946  151348 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 01:16:28.058966  151348 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 01:16:28.058974  151348 command_runner.go:130] > # 	"operations_errors",
	I1004 01:16:28.058982  151348 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 01:16:28.058992  151348 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 01:16:28.059000  151348 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 01:16:28.059011  151348 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 01:16:28.059021  151348 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 01:16:28.059028  151348 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 01:16:28.059038  151348 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 01:16:28.059046  151348 command_runner.go:130] > # 	"containers_oom_total",
	I1004 01:16:28.059056  151348 command_runner.go:130] > # 	"containers_oom",
	I1004 01:16:28.059064  151348 command_runner.go:130] > # 	"processes_defunct",
	I1004 01:16:28.059072  151348 command_runner.go:130] > # 	"operations_total",
	I1004 01:16:28.059083  151348 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 01:16:28.059095  151348 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 01:16:28.059105  151348 command_runner.go:130] > # 	"operations_errors_total",
	I1004 01:16:28.059116  151348 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 01:16:28.059127  151348 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 01:16:28.059140  151348 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 01:16:28.059149  151348 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 01:16:28.059159  151348 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 01:16:28.059170  151348 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 01:16:28.059179  151348 command_runner.go:130] > # ]
	I1004 01:16:28.059188  151348 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 01:16:28.059199  151348 command_runner.go:130] > # metrics_port = 9090
	I1004 01:16:28.059210  151348 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 01:16:28.059220  151348 command_runner.go:130] > # metrics_socket = ""
	I1004 01:16:28.059232  151348 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 01:16:28.059245  151348 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 01:16:28.059254  151348 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 01:16:28.059265  151348 command_runner.go:130] > # certificate on any modification event.
	I1004 01:16:28.059275  151348 command_runner.go:130] > # metrics_cert = ""
	I1004 01:16:28.059284  151348 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 01:16:28.059296  151348 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 01:16:28.059305  151348 command_runner.go:130] > # metrics_key = ""
	I1004 01:16:28.059318  151348 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 01:16:28.059332  151348 command_runner.go:130] > [crio.tracing]
	I1004 01:16:28.059344  151348 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 01:16:28.059349  151348 command_runner.go:130] > # enable_tracing = false
	I1004 01:16:28.059360  151348 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 01:16:28.059372  151348 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 01:16:28.059382  151348 command_runner.go:130] > # Number of samples to collect per million spans.
	I1004 01:16:28.059393  151348 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 01:16:28.059406  151348 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 01:16:28.059415  151348 command_runner.go:130] > [crio.stats]
	I1004 01:16:28.059428  151348 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 01:16:28.059439  151348 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 01:16:28.059446  151348 command_runner.go:130] > # stats_collection_period = 0
	I1004 01:16:28.059481  151348 command_runner.go:130] ! time="2023-10-04 01:16:28.001220104Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1004 01:16:28.059502  151348 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
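
The dump above is the crio.conf that CRI-O reports for this node; the values minikube actively sets (cgroup_manager = "cgroupfs", pids_limit = 1024, drop_infra_ctr = false, pause_image, the conmon and runc paths) are the uncommented lines, while everything prefixed with "#" is a documented default. A minimal Go sketch, assuming the rendered file lives at /etc/crio/crio.conf on the guest (that path is an assumption, not taken from this log), which prints only the active settings so they can be compared against the defaults above:

    // crioconf.go: sketch only, not minikube code. Prints non-comment,
    // non-blank lines of a crio.conf, i.e. section headers and the
    // settings that are actually in effect.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/crio/crio.conf") // assumed path; adjust for crio.conf.d drop-ins
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            // Skip blanks and commented-out defaults; keep [sections] and active keys.
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            fmt.Println(line)
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
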
	I1004 01:16:28.059627  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:16:28.059647  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:16:28.059665  151348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:16:28.059694  151348 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.212 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-038823 NodeName:multinode-038823 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.212 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:16:28.059864  151348 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.212
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-038823"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.212
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:16:28.059955  151348 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-038823 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 01:16:28.060014  151348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:16:28.069697  151348 command_runner.go:130] > kubeadm
	I1004 01:16:28.069710  151348 command_runner.go:130] > kubectl
	I1004 01:16:28.069714  151348 command_runner.go:130] > kubelet
	I1004 01:16:28.069814  151348 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:16:28.069936  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:16:28.079160  151348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I1004 01:16:28.094819  151348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:16:28.110721  151348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1004 01:16:28.127364  151348 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1004 01:16:28.130935  151348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.212	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
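
The bash one-liner above is how minikube keeps /etc/hosts idempotent on the guest: filter out any existing line ending in "<TAB>control-plane.minikube.internal", append the fresh "IP<TAB>name" mapping, and sudo cp the result back over /etc/hosts. A rough Go sketch of the same filter-and-append logic (illustration only, not minikube's code; it prints the updated file instead of writing it, since the write needs root):

    // hostsentry.go: sketch of the idempotent hosts update shown in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any line already mapping name and appends ip<TAB>name.
    func upsertHost(hosts, ip, name string) string {
        var out []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            out = append(out, line)
        }
        out = append(out, ip+"\t"+name)
        return strings.Join(out, "\n") + "\n"
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Print the updated file; minikube copies it back with sudo on the VM.
        fmt.Print(upsertHost(string(data), "192.168.39.212", "control-plane.minikube.internal"))
    }
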
	I1004 01:16:28.142657  151348 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823 for IP: 192.168.39.212
	I1004 01:16:28.142700  151348 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:16:28.142904  151348 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:16:28.142962  151348 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:16:28.143043  151348 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key
	I1004 01:16:28.143120  151348 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key.543da273
	I1004 01:16:28.143186  151348 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key
	I1004 01:16:28.143197  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1004 01:16:28.143212  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1004 01:16:28.143225  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1004 01:16:28.143239  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1004 01:16:28.143251  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 01:16:28.143264  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 01:16:28.143275  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 01:16:28.143287  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 01:16:28.143335  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:16:28.143364  151348 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:16:28.143374  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:16:28.143399  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:16:28.143421  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:16:28.143445  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:16:28.143485  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:16:28.143509  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 01:16:28.143522  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:16:28.143537  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 01:16:28.144270  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:16:28.168229  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:16:28.191168  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:16:28.213888  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 01:16:28.235556  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:16:28.257699  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:16:28.279362  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:16:28.301646  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:16:28.324219  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:16:28.346409  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:16:28.368391  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:16:28.390377  151348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:16:28.411271  151348 ssh_runner.go:195] Run: openssl version
	I1004 01:16:28.416627  151348 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1004 01:16:28.416690  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:16:28.427499  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:16:28.432615  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:16:28.432672  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:16:28.432729  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:16:28.438669  151348 command_runner.go:130] > 3ec20f2e
	I1004 01:16:28.438762  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:16:28.449056  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:16:28.459364  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:16:28.463921  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:16:28.464058  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:16:28.464156  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:16:28.469384  151348 command_runner.go:130] > b5213941
	I1004 01:16:28.469624  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:16:28.480121  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:16:28.490894  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:16:28.495833  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:16:28.496101  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:16:28.496187  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:16:28.501403  151348 command_runner.go:130] > 51391683
	I1004 01:16:28.501729  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
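
The three blocks above all follow OpenSSL's subject-hash convention: place the PEM under /usr/share/ca-certificates, ask openssl for its subject hash (3ec20f2e, b5213941 and 51391683 in this run), and point /etc/ssl/certs/<hash>.0 at it so TLS clients can locate the CA. A small Go sketch of those two steps, reusing the exact openssl invocation from the log (a hypothetical helper, not minikube code; creating the symlink needs root, hence the sudo above):

    // certhashlink.go: sketch of the subject-hash symlink setup from the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"

        // Same command the log runs: openssl x509 -hash -noout -in <pem>
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem

        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // "ln -fs" semantics: replace an existing link
        if err := os.Symlink(pem, link); err != nil {
            fmt.Fprintln(os.Stderr, err) // requires root, like the sudo in the log
            os.Exit(1)
        }
        fmt.Println(link, "->", pem)
    }
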
	I1004 01:16:28.511945  151348 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:16:28.516281  151348 command_runner.go:130] > ca.crt
	I1004 01:16:28.516306  151348 command_runner.go:130] > ca.key
	I1004 01:16:28.516315  151348 command_runner.go:130] > healthcheck-client.crt
	I1004 01:16:28.516331  151348 command_runner.go:130] > healthcheck-client.key
	I1004 01:16:28.516337  151348 command_runner.go:130] > peer.crt
	I1004 01:16:28.516343  151348 command_runner.go:130] > peer.key
	I1004 01:16:28.516349  151348 command_runner.go:130] > server.crt
	I1004 01:16:28.516355  151348 command_runner.go:130] > server.key
	I1004 01:16:28.516465  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:16:28.522255  151348 command_runner.go:130] > Certificate will not expire
	I1004 01:16:28.522432  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:16:28.528294  151348 command_runner.go:130] > Certificate will not expire
	I1004 01:16:28.528468  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:16:28.534626  151348 command_runner.go:130] > Certificate will not expire
	I1004 01:16:28.534899  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:16:28.540659  151348 command_runner.go:130] > Certificate will not expire
	I1004 01:16:28.540962  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:16:28.546805  151348 command_runner.go:130] > Certificate will not expire
	I1004 01:16:28.546882  151348 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 01:16:28.553176  151348 command_runner.go:130] > Certificate will not expire
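
Each "Certificate will not expire" line is the success path of `openssl x509 -checkend 86400`, which exits 0 when the certificate is still valid 24 hours from now and non-zero otherwise. A short Go sketch of interpreting that exit code (the helper name, the cert list, and treating any error as "may expire" are assumptions for illustration, not minikube's implementation):

    // checkend.go: sketch of the expiry probe used repeatedly above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // willExpireWithin reports whether cert expires within the given window.
    // A non-zero exit from openssl (including a read failure) is treated as "yes".
    func willExpireWithin(cert string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", cert,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil
    }

    func main() {
        for _, cert := range []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            if willExpireWithin(cert, 86400) {
                fmt.Println(cert, "expires within 24h (or could not be read)")
            } else {
                fmt.Println(cert, "will not expire within 24h")
            }
        }
    }
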
	I1004 01:16:28.553268  151348 kubeadm.go:404] StartCluster: {Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:16:28.553383  151348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:16:28.553453  151348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:16:28.596265  151348 cri.go:89] found id: ""
	I1004 01:16:28.596350  151348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:16:28.607230  151348 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1004 01:16:28.607257  151348 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1004 01:16:28.607264  151348 command_runner.go:130] > /var/lib/minikube/etcd:
	I1004 01:16:28.607274  151348 command_runner.go:130] > member
	I1004 01:16:28.607397  151348 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:16:28.607418  151348 kubeadm.go:636] restartCluster start
	I1004 01:16:28.607481  151348 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:16:28.617902  151348 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:28.618594  151348 kubeconfig.go:92] found "multinode-038823" server: "https://192.168.39.212:8443"
	I1004 01:16:28.619197  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:16:28.619570  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:16:28.621223  151348 cert_rotation.go:137] Starting client certificate rotation controller
	I1004 01:16:28.621552  151348 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:16:28.631357  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:28.631436  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:28.644086  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:28.644110  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:28.644161  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:28.655575  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:29.156318  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:29.156396  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:29.168738  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:29.656391  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:29.656499  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:29.668418  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:30.156102  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:30.156180  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:30.168540  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:30.656082  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:30.656191  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:30.668656  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:31.156168  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:31.156261  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:31.170052  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:31.656543  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:31.656616  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:31.669948  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:32.156468  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:32.156594  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:32.169854  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:32.656515  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:32.656588  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:32.670072  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:33.155654  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:33.155769  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:33.169002  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:33.656621  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:33.656726  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:33.671676  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:34.156175  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:34.156269  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:34.169718  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:34.656291  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:34.656371  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:34.669902  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:35.155705  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:35.155795  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:35.167481  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:35.656035  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:35.656127  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:35.668290  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:36.155776  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:36.155878  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:36.169950  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:36.656627  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:36.656713  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:36.668909  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:37.156521  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:37.156622  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:37.168436  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:37.655914  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:37.656030  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:37.668183  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:38.155702  151348 api_server.go:166] Checking apiserver status ...
	I1004 01:16:38.155799  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:16:38.167763  151348 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:16:38.632443  151348 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
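The ten seconds of pgrep attempts above are a poll-until-deadline loop: probe for a kube-apiserver process roughly twice a second and give up when the surrounding context expires, which is what produces the "context deadline exceeded" verdict and the decision to reconfigure. A minimal sketch of that pattern, with a stand-in probe rather than minikube's implementation:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // probe is a stand-in for minikube's check; here it simply asks pgrep
    // whether a kube-apiserver process exists.
    func probe(ctx context.Context) error {
        return exec.CommandContext(ctx, "pgrep", "-x", "kube-apiserver").Run()
    }

    func waitForAPIServer(timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            if err := probe(ctx); err == nil {
                return nil // process found
            }
            select {
            case <-ctx.Done():
                return errors.New("apiserver error: " + ctx.Err().Error())
            case <-ticker.C:
                // try again
            }
        }
    }

    func main() {
        if err := waitForAPIServer(10 * time.Second); err != nil {
            fmt.Println("needs reconfigure:", err)
            return
        }
        fmt.Println("apiserver process is up")
    }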
	I1004 01:16:38.632498  151348 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:16:38.632516  151348 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:16:38.632586  151348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:16:38.676966  151348 cri.go:89] found id: ""
	I1004 01:16:38.677040  151348 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:16:38.693019  151348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:16:38.702192  151348 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1004 01:16:38.702219  151348 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1004 01:16:38.702230  151348 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1004 01:16:38.702242  151348 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:16:38.702297  151348 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:16:38.702356  151348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:16:38.711303  151348 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:16:38.711338  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:38.828212  151348 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:16:38.828572  151348 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1004 01:16:38.829089  151348 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1004 01:16:38.829645  151348 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:16:38.830373  151348 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1004 01:16:38.830924  151348 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:16:38.831799  151348 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1004 01:16:38.832318  151348 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1004 01:16:38.832856  151348 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:16:38.833310  151348 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:16:38.833733  151348 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:16:38.834487  151348 command_runner.go:130] > [certs] Using the existing "sa" key
	I1004 01:16:38.836001  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:38.890395  151348 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:16:39.014252  151348 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:16:39.188755  151348 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:16:39.328429  151348 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:16:39.492719  151348 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:16:39.495402  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:39.561100  151348 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:16:39.564307  151348 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:16:39.564818  151348 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1004 01:16:39.684939  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:39.770908  151348 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:16:39.770935  151348 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:16:39.770945  151348 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:16:39.770955  151348 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:16:39.771249  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:39.837846  151348 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
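Rather than a full kubeadm init, the restart replays individual init phases against the existing /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd. A rough sketch of driving that same phase sequence from Go (illustrative only; minikube shells these commands out over SSH as shown above):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Phases mirror the restart sequence in the log above; paths are
        // the ones minikube uses on the guest, shown here for illustration.
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, p := range phases {
            cmd := exec.Command("kubeadm", p...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }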
	I1004 01:16:39.841191  151348 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:16:39.841275  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:39.861346  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:40.380634  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:40.880714  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:41.380846  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:41.881351  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:42.381453  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:16:42.405021  151348 command_runner.go:130] > 1119
	I1004 01:16:42.405300  151348 api_server.go:72] duration metric: took 2.564109063s to wait for apiserver process to appear ...
	I1004 01:16:42.405323  151348 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:16:42.405348  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:16:46.200969  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:16:46.201009  151348 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:16:46.201024  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:16:46.260378  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:16:46.260416  151348 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:16:46.761114  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:16:46.766509  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:16:46.766536  151348 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:16:47.261186  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:16:47.269491  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:16:47.269556  151348 api_server.go:103] status: https://192.168.39.212:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:16:47.760564  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:16:47.765183  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1004 01:16:47.765273  151348 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1004 01:16:47.765282  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:47.765296  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:47.765304  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:47.772675  151348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 01:16:47.772700  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:47.772709  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:47.772716  151348 round_trippers.go:580]     Content-Length: 263
	I1004 01:16:47.772724  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:47 GMT
	I1004 01:16:47.772733  151348 round_trippers.go:580]     Audit-Id: c6df8685-eb53-4893-b08d-cd8c1def9c94
	I1004 01:16:47.772745  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:47.772757  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:47.772769  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:47.772827  151348 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1004 01:16:47.772920  151348 api_server.go:141] control plane version: v1.28.2
	I1004 01:16:47.772940  151348 api_server.go:131] duration metric: took 5.367608443s to wait for apiserver health ...
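The healthz probing above goes through three stages: 403 while the anonymous request is still forbidden (the rbac/bootstrap-roles poststarthook has not run yet), 500 while individual poststarthooks are still pending, and finally 200 with the body "ok". A minimal sketch of such a probe loop (TLS verification is skipped purely for brevity; the address is the one from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Sketch only: skip TLS verification; a real probe should trust the
        // cluster CA (and may present a client certificate).
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := "https://192.168.39.212:8443/healthz" // address from the log above
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("%d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // body is "ok" once every check passes
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }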
	I1004 01:16:47.772951  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:16:47.772965  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:16:47.775083  151348 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 01:16:47.776702  151348 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 01:16:47.785720  151348 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1004 01:16:47.785764  151348 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1004 01:16:47.785776  151348 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1004 01:16:47.785786  151348 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:16:47.785796  151348 command_runner.go:130] > Access: 2023-10-04 01:16:15.620741754 +0000
	I1004 01:16:47.785810  151348 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1004 01:16:47.785826  151348 command_runner.go:130] > Change: 2023-10-04 01:16:13.762741754 +0000
	I1004 01:16:47.785859  151348 command_runner.go:130] >  Birth: -
	I1004 01:16:47.785930  151348 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 01:16:47.785947  151348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 01:16:47.818177  151348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 01:16:49.102569  151348 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:16:49.102600  151348 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:16:49.102610  151348 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1004 01:16:49.102618  151348 command_runner.go:130] > daemonset.apps/kindnet configured
	I1004 01:16:49.102931  151348 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.284711871s)
	I1004 01:16:49.102965  151348 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:16:49.103060  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:16:49.103068  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.103076  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.103082  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.119622  151348 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1004 01:16:49.119654  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.119664  151348 round_trippers.go:580]     Audit-Id: 01a335ba-254f-4a6d-a78e-9d795bb7c82a
	I1004 01:16:49.119687  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.119695  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.119706  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.119719  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.119726  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.120757  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"791"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83254 chars]
	I1004 01:16:49.124633  151348 system_pods.go:59] 12 kube-system pods found
	I1004 01:16:49.124675  151348 system_pods.go:61] "coredns-5dd5756b68-xbln6" [956d98ac-25cb-4d19-a9c7-c3a9682eff67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:16:49.124688  151348 system_pods.go:61] "etcd-multinode-038823" [040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 01:16:49.124696  151348 system_pods.go:61] "kindnet-cqczw" [e511e913-b479-4024-9942-72775656744a] Running
	I1004 01:16:49.124705  151348 system_pods.go:61] "kindnet-prsst" [1775280f-c3e2-4162-9287-9b58a90c8f83] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1004 01:16:49.124715  151348 system_pods.go:61] "kindnet-zg29t" [94d43c66-bea3-44a0-bbf2-85a553e012b0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1004 01:16:49.124729  151348 system_pods.go:61] "kube-apiserver-multinode-038823" [8f46d14f-fac3-4029-af40-ad242d6e93e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 01:16:49.124742  151348 system_pods.go:61] "kube-controller-manager-multinode-038823" [ace8ff54-191a-4969-bc58-ad0440f25084] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:16:49.124758  151348 system_pods.go:61] "kube-proxy-hgg2z" [28d3f9c9-4eb8-4c36-81b0-1726a87d20a6] Running
	I1004 01:16:49.124766  151348 system_pods.go:61] "kube-proxy-psqss" [455f6f13-5661-4b4e-847b-9266e44c03d8] Running
	I1004 01:16:49.124779  151348 system_pods.go:61] "kube-proxy-pz9j4" [36f00e2f-5611-43ae-94b5-d9dde6784128] Running
	I1004 01:16:49.124795  151348 system_pods.go:61] "kube-scheduler-multinode-038823" [2da95c67-ae74-41db-a746-455fa043f9a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:16:49.124819  151348 system_pods.go:61] "storage-provisioner" [b4bd2f00-0b17-47da-add0-486f8232ea80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 01:16:49.124835  151348 system_pods.go:74] duration metric: took 21.859577ms to wait for pod list to return data ...
	I1004 01:16:49.124849  151348 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:16:49.124944  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:16:49.124953  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.124964  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.124972  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.132081  151348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 01:16:49.132100  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.132111  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.132120  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.132129  151348 round_trippers.go:580]     Audit-Id: 9de64609-9589-4c29-a27e-a552e6e299a3
	I1004 01:16:49.132139  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.132148  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.132153  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.135382  151348 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"792"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15251 chars]
	I1004 01:16:49.136244  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:16:49.136277  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:16:49.136320  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:16:49.136328  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:16:49.136333  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:16:49.136339  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:16:49.136344  151348 node_conditions.go:105] duration metric: took 11.488046ms to run NodePressure ...
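The NodePressure verification reads each node's reported capacity, here 2 CPUs and 17784752Ki of ephemeral storage per node. A small client-go sketch that lists nodes and prints the same two fields (the kubeconfig path is illustrative):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative kubeconfig path.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity["cpu"]
            eph := n.Status.Capacity["ephemeral-storage"]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }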
	I1004 01:16:49.136387  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:16:49.359375  151348 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1004 01:16:49.423232  151348 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1004 01:16:49.424844  151348 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:16:49.424985  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1004 01:16:49.424999  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.425011  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.425022  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.428018  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.428042  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.428053  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.428060  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.428066  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.428072  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.428077  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.428082  151348 round_trippers.go:580]     Audit-Id: c153b361-51a4-4f10-a441-f6231ef346a5
	I1004 01:16:49.429011  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"762","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I1004 01:16:49.430006  151348 kubeadm.go:787] kubelet initialised
	I1004 01:16:49.430030  151348 kubeadm.go:788] duration metric: took 5.15971ms waiting for restarted kubelet to initialise ...
	I1004 01:16:49.430074  151348 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:16:49.430159  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:16:49.430170  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.430183  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.430192  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.433392  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:49.433406  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.433412  151348 round_trippers.go:580]     Audit-Id: 4d788d20-17ff-447f-8b22-cbdf1386176c
	I1004 01:16:49.433417  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.433424  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.433433  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.433441  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.433450  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.435147  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"794"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83254 chars]
	I1004 01:16:49.437512  151348 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.437610  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:49.437620  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.437631  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.437641  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.439670  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.439687  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.439693  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.439698  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.439704  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.439711  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.439716  151348 round_trippers.go:580]     Audit-Id: 91371b0c-e526-419f-bae5-0e23b5357185
	I1004 01:16:49.439724  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.439832  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:49.440286  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:49.440301  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.440311  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.440322  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.442529  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.442548  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.442559  151348 round_trippers.go:580]     Audit-Id: 8270b139-612b-4693-a3a0-6cc4a142cd7a
	I1004 01:16:49.442568  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.442576  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.442587  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.442596  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.442607  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.442705  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:49.443100  151348 pod_ready.go:97] node "multinode-038823" hosting pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.443123  151348 pod_ready.go:81] duration metric: took 5.586031ms waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:49.443135  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.443148  151348 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.443210  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:16:49.443221  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.443232  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.443244  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.445369  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.445387  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.445397  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.445405  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.445413  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.445429  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.445437  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.445444  151348 round_trippers.go:580]     Audit-Id: 45222cf4-a429-4c0f-b5f6-3eb91fd0d99c
	I1004 01:16:49.445678  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"762","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I1004 01:16:49.446163  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:49.446178  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.446185  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.446190  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.448117  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:16:49.448145  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.448154  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.448162  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.448170  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.448178  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.448186  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.448194  151348 round_trippers.go:580]     Audit-Id: 49e2c540-956b-49c5-99d5-cdd0b78f3b10
	I1004 01:16:49.448438  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:49.448832  151348 pod_ready.go:97] node "multinode-038823" hosting pod "etcd-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.448863  151348 pod_ready.go:81] duration metric: took 5.698425ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:49.448880  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "etcd-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.448899  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.448946  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:16:49.448953  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.448960  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.448966  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.451210  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.451229  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.451238  151348 round_trippers.go:580]     Audit-Id: fd10806e-3638-45e2-8eb9-3c1f49f51d6a
	I1004 01:16:49.451247  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.451257  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.451270  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.451278  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.451285  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.451681  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"763","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I1004 01:16:49.452148  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:49.452166  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.452175  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.452184  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.454617  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.454636  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.454645  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.454654  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.454662  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.454670  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.454683  151348 round_trippers.go:580]     Audit-Id: 4aad6730-75d2-4801-a331-a596a1b7646d
	I1004 01:16:49.454692  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.455908  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:49.456288  151348 pod_ready.go:97] node "multinode-038823" hosting pod "kube-apiserver-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.456311  151348 pod_ready.go:81] duration metric: took 7.400492ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:49.456322  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "kube-apiserver-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.456332  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.456394  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:16:49.456400  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.456411  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.456421  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.458187  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:16:49.458207  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.458216  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.458224  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.458232  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.458241  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.458251  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.458264  151348 round_trippers.go:580]     Audit-Id: 9e47d3a2-37ba-4a1c-bd85-593d242ee52b
	I1004 01:16:49.458614  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"767","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I1004 01:16:49.503338  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:49.503357  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.503366  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.503372  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.505953  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.505976  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.505986  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.505995  151348 round_trippers.go:580]     Audit-Id: ce6d3fdc-05c4-4e6d-9c14-760229e1a959
	I1004 01:16:49.506003  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.506011  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.506020  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.506027  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.506518  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:49.506901  151348 pod_ready.go:97] node "multinode-038823" hosting pod "kube-controller-manager-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.506924  151348 pod_ready.go:81] duration metric: took 50.577047ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:49.506937  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "kube-controller-manager-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:49.506949  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.703321  151348 request.go:629] Waited for 196.289671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:16:49.703393  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:16:49.703398  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.703406  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.703412  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.706565  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:49.706585  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.706594  151348 round_trippers.go:580]     Audit-Id: e3596823-0183-4664-b11c-52be7d48d71f
	I1004 01:16:49.706604  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.706612  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.706621  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.706629  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.706637  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.706900  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"505","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1004 01:16:49.903280  151348 request.go:629] Waited for 195.906033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:16:49.903346  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:16:49.903351  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:49.903359  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:49.903365  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:49.905885  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:49.905903  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:49.905911  151348 round_trippers.go:580]     Audit-Id: b2a9469e-70af-44dd-878f-72fff8ec488d
	I1004 01:16:49.905921  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:49.905929  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:49.905938  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:49.905947  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:49.905955  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:49 GMT
	I1004 01:16:49.906142  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"758","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1004 01:16:49.906430  151348 pod_ready.go:92] pod "kube-proxy-hgg2z" in "kube-system" namespace has status "Ready":"True"
	I1004 01:16:49.906446  151348 pod_ready.go:81] duration metric: took 399.489978ms waiting for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:49.906456  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:50.103945  151348 request.go:629] Waited for 197.357349ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:16:50.104023  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:16:50.104029  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:50.104039  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:50.104053  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:50.107011  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:50.107033  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:50.107044  151348 round_trippers.go:580]     Audit-Id: 6533f9f4-155f-44ab-98a1-b5a608685eab
	I1004 01:16:50.107053  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:50.107062  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:50.107071  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:50.107077  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:50.107085  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:50 GMT
	I1004 01:16:50.107435  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psqss","generateName":"kube-proxy-","namespace":"kube-system","uid":"455f6f13-5661-4b4e-847b-9266e44c03d8","resourceVersion":"712","creationTimestamp":"2023-10-04T01:08:09Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1004 01:16:50.303163  151348 request.go:629] Waited for 195.298731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:16:50.303249  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:16:50.303254  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:50.303262  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:50.303272  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:50.305673  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:50.305699  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:50.305709  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:50.305718  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:50.305726  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:50 GMT
	I1004 01:16:50.305734  151348 round_trippers.go:580]     Audit-Id: a4070279-50ad-4a03-9f90-759de203bba8
	I1004 01:16:50.305742  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:50.305751  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:50.306083  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"aecf3685-48bc-4468-b845-c7c671e5cd13","resourceVersion":"792","creationTimestamp":"2023-10-04T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I1004 01:16:50.306381  151348 pod_ready.go:92] pod "kube-proxy-psqss" in "kube-system" namespace has status "Ready":"True"
	I1004 01:16:50.306400  151348 pod_ready.go:81] duration metric: took 399.936997ms waiting for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:50.306413  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:50.503891  151348 request.go:629] Waited for 197.392942ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:16:50.503964  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:16:50.503970  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:50.503981  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:50.503987  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:50.507540  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:50.507567  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:50.507574  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:50.507580  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:50.507585  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:50 GMT
	I1004 01:16:50.507590  151348 round_trippers.go:580]     Audit-Id: 7d037c39-315a-4b86-a77c-5b857fa987ef
	I1004 01:16:50.507595  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:50.507602  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:50.507864  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"791","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:16:50.703676  151348 request.go:629] Waited for 195.357011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:50.703794  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:50.703822  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:50.703835  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:50.703846  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:50.706950  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:50.706971  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:50.706981  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:50 GMT
	I1004 01:16:50.706990  151348 round_trippers.go:580]     Audit-Id: ca23a8f8-7100-49ad-a2ab-b67733efa849
	I1004 01:16:50.706997  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:50.707006  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:50.707014  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:50.707025  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:50.707203  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:50.707537  151348 pod_ready.go:97] node "multinode-038823" hosting pod "kube-proxy-pz9j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:50.707556  151348 pod_ready.go:81] duration metric: took 401.135767ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:50.707566  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "kube-proxy-pz9j4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:50.707572  151348 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:50.903986  151348 request.go:629] Waited for 196.345286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:16:50.904084  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:16:50.904093  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:50.904110  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:50.904121  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:50.907751  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:50.907775  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:50.907784  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:50 GMT
	I1004 01:16:50.907792  151348 round_trippers.go:580]     Audit-Id: c0dddba7-adc1-4bb6-bf09-cca1ad678324
	I1004 01:16:50.907806  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:50.907814  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:50.907823  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:50.907832  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:50.908140  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"761","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I1004 01:16:51.103904  151348 request.go:629] Waited for 195.382508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.103991  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.104003  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:51.104015  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:51.104023  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:51.107119  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:51.107139  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:51.107150  151348 round_trippers.go:580]     Audit-Id: 7d0774c1-39ab-474e-ba13-9608ce7c4ec2
	I1004 01:16:51.107157  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:51.107166  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:51.107174  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:51.107184  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:51.107198  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:51 GMT
	I1004 01:16:51.107479  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:51.107806  151348 pod_ready.go:97] node "multinode-038823" hosting pod "kube-scheduler-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:51.107826  151348 pod_ready.go:81] duration metric: took 400.246426ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	E1004 01:16:51.107839  151348 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-038823" hosting pod "kube-scheduler-multinode-038823" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-038823" has status "Ready":"False"
	I1004 01:16:51.107850  151348 pod_ready.go:38] duration metric: took 1.677758553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:16:51.107877  151348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:16:51.120550  151348 command_runner.go:130] > -16
	I1004 01:16:51.121155  151348 ops.go:34] apiserver oom_adj: -16
	I1004 01:16:51.121177  151348 kubeadm.go:640] restartCluster took 22.51375019s
	I1004 01:16:51.121188  151348 kubeadm.go:406] StartCluster complete in 22.567937323s
	I1004 01:16:51.121213  151348 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:16:51.121310  151348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:16:51.121960  151348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:16:51.122214  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:16:51.122416  151348 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:16:51.125216  151348 out.go:177] * Enabled addons: 
	I1004 01:16:51.122564  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:16:51.122563  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:16:51.126608  151348 addons.go:502] enable addons completed in 4.201356ms: enabled=[]
	I1004 01:16:51.126896  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:16:51.127247  151348 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:16:51.127260  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:51.127271  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:51.127279  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:51.130402  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:51.130417  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:51.130426  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:51.130435  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:51.130441  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:51.130447  151348 round_trippers.go:580]     Content-Length: 291
	I1004 01:16:51.130452  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:51 GMT
	I1004 01:16:51.130457  151348 round_trippers.go:580]     Audit-Id: c2480db5-8617-4ead-8fd0-cd1c17cb0bae
	I1004 01:16:51.130462  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:51.130503  151348 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"793","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1004 01:16:51.130695  151348 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-038823" context rescaled to 1 replicas
	I1004 01:16:51.130757  151348 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:16:51.133264  151348 out.go:177] * Verifying Kubernetes components...
	I1004 01:16:51.134650  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:16:51.228277  151348 command_runner.go:130] > apiVersion: v1
	I1004 01:16:51.228313  151348 command_runner.go:130] > data:
	I1004 01:16:51.228322  151348 command_runner.go:130] >   Corefile: |
	I1004 01:16:51.228328  151348 command_runner.go:130] >     .:53 {
	I1004 01:16:51.228355  151348 command_runner.go:130] >         log
	I1004 01:16:51.228364  151348 command_runner.go:130] >         errors
	I1004 01:16:51.228370  151348 command_runner.go:130] >         health {
	I1004 01:16:51.228378  151348 command_runner.go:130] >            lameduck 5s
	I1004 01:16:51.228384  151348 command_runner.go:130] >         }
	I1004 01:16:51.228395  151348 command_runner.go:130] >         ready
	I1004 01:16:51.228405  151348 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1004 01:16:51.228416  151348 command_runner.go:130] >            pods insecure
	I1004 01:16:51.228425  151348 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1004 01:16:51.228434  151348 command_runner.go:130] >            ttl 30
	I1004 01:16:51.228439  151348 command_runner.go:130] >         }
	I1004 01:16:51.228448  151348 command_runner.go:130] >         prometheus :9153
	I1004 01:16:51.228455  151348 command_runner.go:130] >         hosts {
	I1004 01:16:51.228466  151348 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1004 01:16:51.228473  151348 command_runner.go:130] >            fallthrough
	I1004 01:16:51.228482  151348 command_runner.go:130] >         }
	I1004 01:16:51.228490  151348 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1004 01:16:51.228501  151348 command_runner.go:130] >            max_concurrent 1000
	I1004 01:16:51.228511  151348 command_runner.go:130] >         }
	I1004 01:16:51.228520  151348 command_runner.go:130] >         cache 30
	I1004 01:16:51.228528  151348 command_runner.go:130] >         loop
	I1004 01:16:51.228538  151348 command_runner.go:130] >         reload
	I1004 01:16:51.228547  151348 command_runner.go:130] >         loadbalance
	I1004 01:16:51.228557  151348 command_runner.go:130] >     }
	I1004 01:16:51.228563  151348 command_runner.go:130] > kind: ConfigMap
	I1004 01:16:51.228573  151348 command_runner.go:130] > metadata:
	I1004 01:16:51.228581  151348 command_runner.go:130] >   creationTimestamp: "2023-10-04T01:06:23Z"
	I1004 01:16:51.228590  151348 command_runner.go:130] >   name: coredns
	I1004 01:16:51.228596  151348 command_runner.go:130] >   namespace: kube-system
	I1004 01:16:51.228602  151348 command_runner.go:130] >   resourceVersion: "387"
	I1004 01:16:51.228609  151348 command_runner.go:130] >   uid: 868b8069-9cac-4d4d-8503-3a3cef90175c
	I1004 01:16:51.231999  151348 node_ready.go:35] waiting up to 6m0s for node "multinode-038823" to be "Ready" ...
	I1004 01:16:51.232190  151348 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1004 01:16:51.303392  151348 request.go:629] Waited for 71.286547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.303471  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.303476  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:51.303484  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:51.303492  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:51.306485  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:51.306497  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:51.306503  151348 round_trippers.go:580]     Audit-Id: 1196abbc-ce49-4764-8803-eed84f1ea266
	I1004 01:16:51.306509  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:51.306514  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:51.306519  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:51.306523  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:51.306528  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:51 GMT
	I1004 01:16:51.307023  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:51.503734  151348 request.go:629] Waited for 196.35531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.503804  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:51.503808  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:51.503816  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:51.503822  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:51.507954  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:16:51.507974  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:51.507980  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:51.507986  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:51 GMT
	I1004 01:16:51.507991  151348 round_trippers.go:580]     Audit-Id: 659e28e7-36e2-4d23-b645-5a1e531dbcb7
	I1004 01:16:51.507997  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:51.508001  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:51.508007  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:51.508195  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:52.009400  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:52.009422  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:52.009431  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:52.009437  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:52.018705  151348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1004 01:16:52.018727  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:52.018733  151348 round_trippers.go:580]     Audit-Id: 777ba1d7-2ecf-4153-ba68-7992d666f4bc
	I1004 01:16:52.018739  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:52.018744  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:52.018749  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:52.018754  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:52.018761  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:51 GMT
	I1004 01:16:52.018920  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:52.509660  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:52.509687  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:52.509695  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:52.509705  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:52.512538  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:52.512569  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:52.512580  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:52.512589  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:52 GMT
	I1004 01:16:52.512594  151348 round_trippers.go:580]     Audit-Id: e95c4596-a485-4268-9626-6a47d15f6a3d
	I1004 01:16:52.512602  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:52.512607  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:52.512616  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:52.513201  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:53.009325  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:53.009361  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:53.009370  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:53.009376  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:53.012782  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:53.012818  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:53.012827  151348 round_trippers.go:580]     Audit-Id: c8b8bf5a-e0f0-4eb3-86b1-664883a4bda0
	I1004 01:16:53.012833  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:53.012839  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:53.012844  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:53.012853  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:53.012858  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:52 GMT
	I1004 01:16:53.013081  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:53.509762  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:53.509791  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:53.509809  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:53.509815  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:53.513463  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:53.513491  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:53.513502  151348 round_trippers.go:580]     Audit-Id: 4d524758-8fc1-4da2-a790-11a43bff4d4b
	I1004 01:16:53.513510  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:53.513518  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:53.513528  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:53.513535  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:53.513549  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:53 GMT
	I1004 01:16:53.514439  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:53.514740  151348 node_ready.go:58] node "multinode-038823" has status "Ready":"False"
	I1004 01:16:54.009142  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:54.009173  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:54.009181  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:54.009187  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:54.012853  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:54.012881  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:54.012888  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:54.012895  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:54.012900  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:53 GMT
	I1004 01:16:54.012911  151348 round_trippers.go:580]     Audit-Id: 494ece98-41a4-48a8-bb77-56c0ae6a5026
	I1004 01:16:54.012919  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:54.012925  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:54.013416  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:54.509058  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:54.509079  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:54.509087  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:54.509094  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:54.511863  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:54.511884  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:54.511894  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:54.511903  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:54 GMT
	I1004 01:16:54.511910  151348 round_trippers.go:580]     Audit-Id: b3f383f0-7301-4be2-96bb-370f46686745
	I1004 01:16:54.511919  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:54.511932  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:54.511944  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:54.512179  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:55.009356  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:55.009386  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:55.009403  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:55.009412  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:55.012106  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:55.012122  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:55.012129  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:54 GMT
	I1004 01:16:55.012134  151348 round_trippers.go:580]     Audit-Id: d5f62d90-d773-44ab-b258-9d02d20bd4f9
	I1004 01:16:55.012139  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:55.012196  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:55.012210  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:55.012219  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:55.012395  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:55.509029  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:55.509055  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:55.509067  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:55.509076  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:55.511974  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:55.511995  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:55.512001  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:55.512007  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:55.512012  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:55 GMT
	I1004 01:16:55.512017  151348 round_trippers.go:580]     Audit-Id: aaf12a78-3a1b-47a5-8a6d-99edf619ad3e
	I1004 01:16:55.512024  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:55.512032  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:55.512659  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:56.009301  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:56.009327  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:56.009338  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:56.009347  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:56.012142  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:56.012165  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:56.012176  151348 round_trippers.go:580]     Audit-Id: 038dde4f-828e-4a59-a108-017fdfce28e4
	I1004 01:16:56.012185  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:56.012194  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:56.012203  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:56.012211  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:56.012216  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:55 GMT
	I1004 01:16:56.012688  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:56.013085  151348 node_ready.go:58] node "multinode-038823" has status "Ready":"False"
	I1004 01:16:56.509444  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:56.509474  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:56.509488  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:56.509497  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:56.512229  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:56.512253  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:56.512263  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:56.512273  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:56.512282  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:56.512291  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:56 GMT
	I1004 01:16:56.512337  151348 round_trippers.go:580]     Audit-Id: 6cf6ab35-dd87-4d04-930c-081555abe5c9
	I1004 01:16:56.512383  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:56.512591  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"740","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I1004 01:16:57.009274  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:57.009313  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.009326  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.009337  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.011923  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:57.011944  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.011951  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.011961  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.011966  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:56 GMT
	I1004 01:16:57.011971  151348 round_trippers.go:580]     Audit-Id: 92e441b1-5270-4af9-b540-479f671ae868
	I1004 01:16:57.011976  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.011981  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.012380  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:57.012666  151348 node_ready.go:49] node "multinode-038823" has status "Ready":"True"
	I1004 01:16:57.012680  151348 node_ready.go:38] duration metric: took 5.780650656s waiting for node "multinode-038823" to be "Ready" ...
	I1004 01:16:57.012689  151348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:16:57.012756  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:16:57.012764  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.012771  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.012777  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.019438  151348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 01:16:57.019457  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.019464  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.019469  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.019475  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.019480  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.019490  151348 round_trippers.go:580]     Audit-Id: 0fbc3353-61c4-4ded-8571-14faa4e09788
	I1004 01:16:57.019498  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.020660  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"873"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82310 chars]
	I1004 01:16:57.023049  151348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:16:57.023129  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:57.023137  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.023144  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.023150  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.025247  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:57.025262  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.025268  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.025273  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.025278  151348 round_trippers.go:580]     Audit-Id: 61518db3-ba78-4854-acac-bf73e36c15c1
	I1004 01:16:57.025283  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.025288  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.025294  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.025671  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:57.026085  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:57.026099  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.026106  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.026112  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.028091  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:16:57.028108  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.028115  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.028120  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.028125  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.028130  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.028135  151348 round_trippers.go:580]     Audit-Id: 2395144e-1074-48cc-8e13-0db3b79860fb
	I1004 01:16:57.028140  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.028272  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:57.028649  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:57.028660  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.028668  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.028673  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.031786  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:57.031806  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.031814  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.031823  151348 round_trippers.go:580]     Audit-Id: 146ed4a5-4100-4bcb-b1bb-f8fdcb577f52
	I1004 01:16:57.031832  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.031840  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.031850  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.031860  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.031975  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:57.032395  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:57.032409  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.032416  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.032421  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.034353  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:16:57.034367  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.034373  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.034378  151348 round_trippers.go:580]     Audit-Id: a1c3b18a-f23e-48f7-a1c6-4e0754af06e1
	I1004 01:16:57.034383  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.034388  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.034393  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.034399  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.035053  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:57.536110  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:57.536134  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.536142  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.536148  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.539152  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:57.539172  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.539178  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.539184  151348 round_trippers.go:580]     Audit-Id: b715491a-0582-40b2-81b4-6e23a914cf18
	I1004 01:16:57.539189  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.539194  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.539199  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.539210  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.539703  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:57.540136  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:57.540148  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:57.540156  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:57.540162  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:57.542545  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:57.542559  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:57.542565  151348 round_trippers.go:580]     Audit-Id: 1cc510b5-3fb7-47ff-a53f-8e0439119e41
	I1004 01:16:57.542570  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:57.542575  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:57.542579  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:57.542584  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:57.542589  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:57 GMT
	I1004 01:16:57.542849  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:58.036017  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:58.036040  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:58.036048  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:58.036054  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:58.039543  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:58.039562  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:58.039569  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:58.039575  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:58.039580  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:58.039585  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:58.039590  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:58 GMT
	I1004 01:16:58.039595  151348 round_trippers.go:580]     Audit-Id: 3ee972f3-7189-4302-9e51-aaa7b5e591b1
	I1004 01:16:58.039898  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:58.040331  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:58.040344  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:58.040351  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:58.040357  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:58.042496  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:58.042509  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:58.042515  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:58.042520  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:58.042525  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:58.042530  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:58 GMT
	I1004 01:16:58.042534  151348 round_trippers.go:580]     Audit-Id: e02a2140-54cb-4876-a089-8679aa2c37a2
	I1004 01:16:58.042539  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:58.042862  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:58.535580  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:58.535605  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:58.535614  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:58.535620  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:58.538747  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:16:58.538772  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:58.538782  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:58.538791  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:58.538799  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:58.538807  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:58 GMT
	I1004 01:16:58.538816  151348 round_trippers.go:580]     Audit-Id: 32d9d667-e3be-4640-a001-72410608905a
	I1004 01:16:58.538827  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:58.539130  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:58.539594  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:58.539612  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:58.539620  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:58.539625  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:58.541901  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:58.541922  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:58.541933  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:58.541946  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:58.541955  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:58.541966  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:58 GMT
	I1004 01:16:58.541974  151348 round_trippers.go:580]     Audit-Id: e352b7a7-9d85-4dbf-8f58-eab0d3868b63
	I1004 01:16:58.541986  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:58.542235  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:59.036238  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:59.036268  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:59.036282  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:59.036291  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:59.039138  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:59.039160  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:59.039167  151348 round_trippers.go:580]     Audit-Id: e3ae8656-6688-4c72-b045-4444d5b67484
	I1004 01:16:59.039173  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:59.039178  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:59.039183  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:59.039188  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:59.039196  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:59 GMT
	I1004 01:16:59.039432  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:59.039918  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:59.039934  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:59.039941  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:59.039947  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:59.042737  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:59.042759  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:59.042770  151348 round_trippers.go:580]     Audit-Id: 4a2bd548-fab9-402a-a7e1-0e01217aa34f
	I1004 01:16:59.042778  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:59.042786  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:59.042794  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:59.042803  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:59.042815  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:59 GMT
	I1004 01:16:59.043538  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:16:59.043940  151348 pod_ready.go:102] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"False"
	I1004 01:16:59.536266  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:16:59.536297  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:59.536310  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:59.536320  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:59.538999  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:16:59.539024  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:59.539034  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:59.539043  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:59.539051  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:59.539058  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:59.539065  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:59 GMT
	I1004 01:16:59.539073  151348 round_trippers.go:580]     Audit-Id: 3283530f-e807-4fd1-bbc4-f8ba7215c4c4
	I1004 01:16:59.539667  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:16:59.540095  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:16:59.540107  151348 round_trippers.go:469] Request Headers:
	I1004 01:16:59.540114  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:16:59.540120  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:16:59.542091  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:16:59.542111  151348 round_trippers.go:577] Response Headers:
	I1004 01:16:59.542120  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:16:59.542128  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:16:59.542136  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:16:59.542145  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:16:59 GMT
	I1004 01:16:59.542154  151348 round_trippers.go:580]     Audit-Id: e1101b98-2274-414b-b22e-30f7f33444b4
	I1004 01:16:59.542163  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:16:59.542573  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:00.036422  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:00.036452  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:00.036464  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:00.036474  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:00.039557  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:00.039584  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:00.039594  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:00.039622  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:00.039635  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:00.039643  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:00.039654  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:00 GMT
	I1004 01:17:00.039664  151348 round_trippers.go:580]     Audit-Id: d4108e5c-d85a-4ad2-a82c-9fc37053f28e
	I1004 01:17:00.041726  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:00.042241  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:00.042257  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:00.042265  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:00.042272  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:00.044677  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:00.044701  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:00.044711  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:00.044724  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:00.044736  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:00.044745  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:00 GMT
	I1004 01:17:00.044755  151348 round_trippers.go:580]     Audit-Id: dd72444b-3f2d-428b-aa36-848508db414a
	I1004 01:17:00.044765  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:00.045203  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:00.535867  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:00.535894  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:00.535902  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:00.535908  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:00.538961  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:00.538987  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:00.538996  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:00.539004  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:00 GMT
	I1004 01:17:00.539012  151348 round_trippers.go:580]     Audit-Id: cb68c493-2cf8-49fa-9089-649706545733
	I1004 01:17:00.539019  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:00.539026  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:00.539034  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:00.539250  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:00.539830  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:00.539847  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:00.539858  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:00.539867  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:00.542001  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:00.542019  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:00.542030  151348 round_trippers.go:580]     Audit-Id: 57e45af5-2dde-4c62-bcf8-ce3ba7df192a
	I1004 01:17:00.542038  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:00.542047  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:00.542055  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:00.542062  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:00.542070  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:00 GMT
	I1004 01:17:00.542213  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:01.035900  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:01.035929  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:01.035937  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:01.035943  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:01.038681  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:01.038706  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:01.038716  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:01.038723  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:01.038730  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:01 GMT
	I1004 01:17:01.038737  151348 round_trippers.go:580]     Audit-Id: a254a64a-4177-428f-be28-ae5aa1768dea
	I1004 01:17:01.038746  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:01.038758  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:01.039330  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:01.039767  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:01.039783  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:01.039801  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:01.039810  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:01.042771  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:01.042793  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:01.042803  151348 round_trippers.go:580]     Audit-Id: 89da98aa-07f5-429a-9f01-7bb9ba3e0080
	I1004 01:17:01.042811  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:01.042821  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:01.042830  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:01.042842  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:01.042855  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:01 GMT
	I1004 01:17:01.043260  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:01.535910  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:01.535936  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:01.535947  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:01.535955  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:01.538834  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:01.538856  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:01.538863  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:01.538870  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:01 GMT
	I1004 01:17:01.538878  151348 round_trippers.go:580]     Audit-Id: de04e1f1-60bf-4ab0-9f53-272743212855
	I1004 01:17:01.538887  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:01.538900  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:01.538909  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:01.539117  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:01.539689  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:01.539707  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:01.539719  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:01.539727  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:01.542096  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:01.542114  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:01.542123  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:01.542131  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:01.542140  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:01.542149  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:01 GMT
	I1004 01:17:01.542166  151348 round_trippers.go:580]     Audit-Id: ec486269-ec11-4044-a324-a615b49c8fac
	I1004 01:17:01.542178  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:01.542419  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:01.542723  151348 pod_ready.go:102] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"False"
	I1004 01:17:02.036140  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:02.036166  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:02.036174  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:02.036180  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:02.038908  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:02.038929  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:02.038939  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:02.038947  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:02.038954  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:02.038961  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:02.038968  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:02 GMT
	I1004 01:17:02.038975  151348 round_trippers.go:580]     Audit-Id: 41fcfcf0-410e-44ba-aa31-ffa37a37aafe
	I1004 01:17:02.039687  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:02.040145  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:02.040159  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:02.040166  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:02.040174  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:02.042565  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:02.042579  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:02.042586  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:02.042591  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:02.042597  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:02 GMT
	I1004 01:17:02.042604  151348 round_trippers.go:580]     Audit-Id: db8d4af3-e5ce-4b59-95a8-ba87132ed1a2
	I1004 01:17:02.042613  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:02.042629  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:02.042782  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:02.536546  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:02.536573  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:02.536582  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:02.536588  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:02.540435  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:02.540459  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:02.540470  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:02.540477  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:02.540484  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:02.540491  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:02.540498  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:02 GMT
	I1004 01:17:02.540506  151348 round_trippers.go:580]     Audit-Id: f3a09f0d-470c-48e8-b6d0-0d982463c870
	I1004 01:17:02.540706  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:02.541191  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:02.541206  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:02.541215  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:02.541224  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:02.543704  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:02.543722  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:02.543731  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:02 GMT
	I1004 01:17:02.543739  151348 round_trippers.go:580]     Audit-Id: b32c7612-c3d4-4bda-aa7f-c00ac81d0d76
	I1004 01:17:02.543747  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:02.543756  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:02.543771  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:02.543780  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:02.544344  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:03.036116  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:03.036140  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:03.036148  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:03.036154  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:03.039258  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:03.039276  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:03.039282  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:03 GMT
	I1004 01:17:03.039288  151348 round_trippers.go:580]     Audit-Id: d0f4c6af-3ffa-4ff1-b34d-314ff95fb202
	I1004 01:17:03.039293  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:03.039297  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:03.039302  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:03.039307  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:03.040980  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:03.041426  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:03.041440  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:03.041447  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:03.041453  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:03.043976  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:03.043989  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:03.043994  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:03 GMT
	I1004 01:17:03.043999  151348 round_trippers.go:580]     Audit-Id: 3a2d1a65-7cae-4086-b93f-4625b35d3fd7
	I1004 01:17:03.044005  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:03.044009  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:03.044016  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:03.044024  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:03.044321  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:03.535662  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:03.535685  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:03.535693  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:03.535699  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:03.539137  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:03.539157  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:03.539164  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:03 GMT
	I1004 01:17:03.539169  151348 round_trippers.go:580]     Audit-Id: 9d49daee-2027-4ef2-b2a6-60d5d908cedc
	I1004 01:17:03.539174  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:03.539180  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:03.539187  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:03.539195  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:03.539811  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:03.540283  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:03.540297  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:03.540306  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:03.540316  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:03.543181  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:03.543201  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:03.543213  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:03.543223  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:03.543233  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:03.543240  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:03.543252  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:03 GMT
	I1004 01:17:03.543258  151348 round_trippers.go:580]     Audit-Id: 2e476aea-5388-4441-8a04-92336875771d
	I1004 01:17:03.543616  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:03.544112  151348 pod_ready.go:102] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"False"
	I1004 01:17:04.036350  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:04.036377  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.036385  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.036391  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.039326  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.039349  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.039356  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.039361  151348 round_trippers.go:580]     Audit-Id: e69b0ff1-61d8-43cb-8936-7258be164b99
	I1004 01:17:04.039366  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.039371  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.039377  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.039382  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.039555  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"766","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I1004 01:17:04.039987  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:04.040000  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.040007  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.040013  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.042214  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.042229  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.042239  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.042248  151348 round_trippers.go:580]     Audit-Id: f90e2b60-2f2b-4679-81ae-3fd7c391e6bf
	I1004 01:17:04.042258  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.042270  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.042277  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.042286  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.042411  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:04.536140  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:17:04.536163  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.536172  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.536179  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.539207  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:04.539226  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.539232  151348 round_trippers.go:580]     Audit-Id: c8cbc439-ca39-4a14-8b6d-9d73e2b3b41e
	I1004 01:17:04.539238  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.539246  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.539254  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.539263  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.539275  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.539444  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1004 01:17:04.539891  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:04.539904  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.539911  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.539917  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.542019  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.542036  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.542047  151348 round_trippers.go:580]     Audit-Id: db96e441-c866-4b3a-867d-46b279d3c6ef
	I1004 01:17:04.542057  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.542063  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.542070  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.542078  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.542087  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.542236  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:04.542625  151348 pod_ready.go:92] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.542643  151348 pod_ready.go:81] duration metric: took 7.519564466s waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.542655  151348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.542732  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:17:04.542745  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.542756  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.542769  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.545076  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.545093  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.545099  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.545104  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.545111  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.545119  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.545127  151348 round_trippers.go:580]     Audit-Id: 40461ce0-cb40-4dcd-b49b-9fe07adad352
	I1004 01:17:04.545134  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.545417  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"865","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1004 01:17:04.545880  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:04.545895  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.545905  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.545915  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.549312  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:04.549326  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.549332  151348 round_trippers.go:580]     Audit-Id: 543cbc66-d87f-4b45-9a79-3732f1181847
	I1004 01:17:04.549339  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.549347  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.549357  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.549366  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.549377  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.549490  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:04.549918  151348 pod_ready.go:92] pod "etcd-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.549937  151348 pod_ready.go:81] duration metric: took 7.269235ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.549959  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.550015  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:17:04.550030  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.550041  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.550050  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.556061  151348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:17:04.556077  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.556083  151348 round_trippers.go:580]     Audit-Id: 54f99796-f92d-40d6-b910-cbbcb40bd046
	I1004 01:17:04.556089  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.556094  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.556098  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.556103  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.556109  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.556828  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"876","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1004 01:17:04.557162  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:04.557173  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.557179  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.557185  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.561046  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:04.561061  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.561067  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.561072  151348 round_trippers.go:580]     Audit-Id: b4f9e261-bee9-4208-a4cb-c1df111aaff2
	I1004 01:17:04.561078  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.561084  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.561092  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.561102  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.562453  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:04.562787  151348 pod_ready.go:92] pod "kube-apiserver-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.562803  151348 pod_ready.go:81] duration metric: took 12.83375ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.562812  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.562867  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:17:04.562875  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.562883  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.562890  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.565221  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.565237  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.565244  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.565251  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.565256  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.565261  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.565266  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.565272  151348 round_trippers.go:580]     Audit-Id: 2b87c7e2-0455-43b4-b4d5-3aa3506bbddb
	I1004 01:17:04.565421  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"816","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1004 01:17:04.565802  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:04.565815  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.565823  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.565829  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.568047  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.568062  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.568070  151348 round_trippers.go:580]     Audit-Id: 2c62c395-d6f4-4726-ae5b-2aac6819477d
	I1004 01:17:04.568078  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.568086  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.568101  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.568110  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.568120  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.568368  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:04.568637  151348 pod_ready.go:92] pod "kube-controller-manager-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.568650  151348 pod_ready.go:81] duration metric: took 5.832663ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.568660  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.568700  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:17:04.568708  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.568714  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.568720  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.571375  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.571394  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.571400  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.571406  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.571412  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.571424  151348 round_trippers.go:580]     Audit-Id: 0089b800-da19-484c-a793-205ecec68d28
	I1004 01:17:04.571432  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.571441  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.571791  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"505","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1004 01:17:04.572166  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:17:04.572180  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.572187  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.572193  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.574613  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.574633  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.574639  151348 round_trippers.go:580]     Audit-Id: f087781c-dd6f-4834-92f3-1d5b810322ae
	I1004 01:17:04.574645  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.574650  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.574658  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.574663  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.574668  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.574955  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef","resourceVersion":"758","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3684 chars]
	I1004 01:17:04.575156  151348 pod_ready.go:92] pod "kube-proxy-hgg2z" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.575167  151348 pod_ready.go:81] duration metric: took 6.502923ms waiting for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.575176  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.736742  151348 request.go:629] Waited for 161.477012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:17:04.736812  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:17:04.736817  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.736825  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.736831  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.739558  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:04.739584  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.739595  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.739609  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.739616  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.739624  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.739637  151348 round_trippers.go:580]     Audit-Id: ea0cf51c-72b5-437e-b1bb-a6d412948947
	I1004 01:17:04.739646  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.740078  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psqss","generateName":"kube-proxy-","namespace":"kube-system","uid":"455f6f13-5661-4b4e-847b-9266e44c03d8","resourceVersion":"712","creationTimestamp":"2023-10-04T01:08:09Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1004 01:17:04.936345  151348 request.go:629] Waited for 195.851391ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:17:04.936434  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:17:04.936441  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:04.936451  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:04.936460  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:04.940387  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:04.940415  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:04.940426  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:04.940435  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:04 GMT
	I1004 01:17:04.940443  151348 round_trippers.go:580]     Audit-Id: d91573cc-09dd-4668-9c0b-f03783490c57
	I1004 01:17:04.940450  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:04.940459  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:04.940469  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:04.941197  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"aecf3685-48bc-4468-b845-c7c671e5cd13","resourceVersion":"792","creationTimestamp":"2023-10-04T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I1004 01:17:04.941533  151348 pod_ready.go:92] pod "kube-proxy-psqss" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:04.941550  151348 pod_ready.go:81] duration metric: took 366.367724ms waiting for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:04.941565  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:05.137095  151348 request.go:629] Waited for 195.418043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:17:05.137172  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:17:05.137184  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.137198  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.137211  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.140489  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:17:05.140505  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.140512  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.140518  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.140523  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.140530  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.140539  151348 round_trippers.go:580]     Audit-Id: 3ea57907-9f4c-4843-980c-1594e07f2ca2
	I1004 01:17:05.140547  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.140932  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"791","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:17:05.336882  151348 request.go:629] Waited for 195.387587ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:05.336966  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:05.336978  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.336989  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.336998  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.341403  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:17:05.341432  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.341443  151348 round_trippers.go:580]     Audit-Id: 91b0d32a-bec8-4bbb-bc82-a967b703f884
	I1004 01:17:05.341452  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.341460  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.341467  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.341476  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.341484  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.341664  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:05.342030  151348 pod_ready.go:92] pod "kube-proxy-pz9j4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:05.342046  151348 pod_ready.go:81] duration metric: took 400.470358ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:05.342055  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:05.536588  151348 request.go:629] Waited for 194.45645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:17:05.536684  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:17:05.536690  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.536698  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.536708  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.539651  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:05.539682  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.539692  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.539697  151348 round_trippers.go:580]     Audit-Id: fecee4ca-b277-4a86-9190-055575400c7c
	I1004 01:17:05.539706  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.539712  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.539717  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.539723  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.539823  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"889","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1004 01:17:05.736668  151348 request.go:629] Waited for 196.452674ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:05.736752  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:17:05.736759  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.736770  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.736778  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.739777  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:05.739803  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.739810  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.739816  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.739821  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.739826  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.739831  151348 round_trippers.go:580]     Audit-Id: 0e3a7716-c806-445b-a562-b409e19f45c5
	I1004 01:17:05.739836  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.740519  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I1004 01:17:05.740827  151348 pod_ready.go:92] pod "kube-scheduler-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:17:05.740840  151348 pod_ready.go:81] duration metric: took 398.779206ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:17:05.740855  151348 pod_ready.go:38] duration metric: took 8.728155382s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:17:05.740871  151348 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:17:05.740917  151348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:17:05.754874  151348 command_runner.go:130] > 1119
	I1004 01:17:05.754925  151348 api_server.go:72] duration metric: took 14.624135211s to wait for apiserver process to appear ...
	I1004 01:17:05.754937  151348 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:17:05.754958  151348 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:17:05.760534  151348 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1004 01:17:05.760599  151348 round_trippers.go:463] GET https://192.168.39.212:8443/version
	I1004 01:17:05.760603  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.760611  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.760617  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.761756  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:17:05.761774  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.761782  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.761796  151348 round_trippers.go:580]     Audit-Id: b8958de2-fb3e-436c-bb63-2b72b4396245
	I1004 01:17:05.761804  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.761812  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.761818  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.761827  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.761834  151348 round_trippers.go:580]     Content-Length: 263
	I1004 01:17:05.761895  151348 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1004 01:17:05.761955  151348 api_server.go:141] control plane version: v1.28.2
	I1004 01:17:05.761971  151348 api_server.go:131] duration metric: took 7.02643ms to wait for apiserver health ...
	I1004 01:17:05.761980  151348 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:17:05.936326  151348 request.go:629] Waited for 174.264225ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:17:05.936404  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:17:05.936415  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:05.936423  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:05.936433  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:05.940641  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:17:05.940664  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:05.940673  151348 round_trippers.go:580]     Audit-Id: 2549ca9d-3aba-44f4-9b13-951dd5f5f1f7
	I1004 01:17:05.940682  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:05.940688  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:05.940697  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:05.940705  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:05.940715  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:05 GMT
	I1004 01:17:05.942356  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"904"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81717 chars]
	I1004 01:17:05.946025  151348 system_pods.go:59] 12 kube-system pods found
	I1004 01:17:05.946059  151348 system_pods.go:61] "coredns-5dd5756b68-xbln6" [956d98ac-25cb-4d19-a9c7-c3a9682eff67] Running
	I1004 01:17:05.946067  151348 system_pods.go:61] "etcd-multinode-038823" [040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13] Running
	I1004 01:17:05.946074  151348 system_pods.go:61] "kindnet-cqczw" [e511e913-b479-4024-9942-72775656744a] Running
	I1004 01:17:05.946080  151348 system_pods.go:61] "kindnet-prsst" [1775280f-c3e2-4162-9287-9b58a90c8f83] Running
	I1004 01:17:05.946090  151348 system_pods.go:61] "kindnet-zg29t" [94d43c66-bea3-44a0-bbf2-85a553e012b0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1004 01:17:05.946100  151348 system_pods.go:61] "kube-apiserver-multinode-038823" [8f46d14f-fac3-4029-af40-ad242d6e93e1] Running
	I1004 01:17:05.946108  151348 system_pods.go:61] "kube-controller-manager-multinode-038823" [ace8ff54-191a-4969-bc58-ad0440f25084] Running
	I1004 01:17:05.946115  151348 system_pods.go:61] "kube-proxy-hgg2z" [28d3f9c9-4eb8-4c36-81b0-1726a87d20a6] Running
	I1004 01:17:05.946120  151348 system_pods.go:61] "kube-proxy-psqss" [455f6f13-5661-4b4e-847b-9266e44c03d8] Running
	I1004 01:17:05.946126  151348 system_pods.go:61] "kube-proxy-pz9j4" [36f00e2f-5611-43ae-94b5-d9dde6784128] Running
	I1004 01:17:05.946134  151348 system_pods.go:61] "kube-scheduler-multinode-038823" [2da95c67-ae74-41db-a746-455fa043f9a7] Running
	I1004 01:17:05.946145  151348 system_pods.go:61] "storage-provisioner" [b4bd2f00-0b17-47da-add0-486f8232ea80] Running
	I1004 01:17:05.946156  151348 system_pods.go:74] duration metric: took 184.169453ms to wait for pod list to return data ...
	I1004 01:17:05.946166  151348 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:17:06.136606  151348 request.go:629] Waited for 190.350131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1004 01:17:06.136692  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/default/serviceaccounts
	I1004 01:17:06.136698  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:06.136706  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:06.136713  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:06.139572  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:06.139596  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:06.139607  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:06.139616  151348 round_trippers.go:580]     Content-Length: 261
	I1004 01:17:06.139630  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:06 GMT
	I1004 01:17:06.139635  151348 round_trippers.go:580]     Audit-Id: 06ed5fdc-ae17-43dc-a295-473f4bf8e43e
	I1004 01:17:06.139640  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:06.139647  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:06.139652  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:06.139675  151348 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"cc30e8ea-fc59-44b4-adeb-db7afac19015","resourceVersion":"335","creationTimestamp":"2023-10-04T01:06:36Z"}}]}
	I1004 01:17:06.139842  151348 default_sa.go:45] found service account: "default"
	I1004 01:17:06.139856  151348 default_sa.go:55] duration metric: took 193.677635ms for default service account to be created ...
	I1004 01:17:06.139864  151348 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:17:06.336220  151348 request.go:629] Waited for 196.294791ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:17:06.336287  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:17:06.336293  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:06.336301  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:06.336307  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:06.340646  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:17:06.340668  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:06.340675  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:06.340681  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:06 GMT
	I1004 01:17:06.340686  151348 round_trippers.go:580]     Audit-Id: 90eed5f8-6963-4ea1-ac59-6eb6995dcae0
	I1004 01:17:06.340691  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:06.340701  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:06.340706  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:06.342234  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81717 chars]
	I1004 01:17:06.344627  151348 system_pods.go:86] 12 kube-system pods found
	I1004 01:17:06.344649  151348 system_pods.go:89] "coredns-5dd5756b68-xbln6" [956d98ac-25cb-4d19-a9c7-c3a9682eff67] Running
	I1004 01:17:06.344654  151348 system_pods.go:89] "etcd-multinode-038823" [040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13] Running
	I1004 01:17:06.344660  151348 system_pods.go:89] "kindnet-cqczw" [e511e913-b479-4024-9942-72775656744a] Running
	I1004 01:17:06.344665  151348 system_pods.go:89] "kindnet-prsst" [1775280f-c3e2-4162-9287-9b58a90c8f83] Running
	I1004 01:17:06.344672  151348 system_pods.go:89] "kindnet-zg29t" [94d43c66-bea3-44a0-bbf2-85a553e012b0] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1004 01:17:06.344678  151348 system_pods.go:89] "kube-apiserver-multinode-038823" [8f46d14f-fac3-4029-af40-ad242d6e93e1] Running
	I1004 01:17:06.344684  151348 system_pods.go:89] "kube-controller-manager-multinode-038823" [ace8ff54-191a-4969-bc58-ad0440f25084] Running
	I1004 01:17:06.344688  151348 system_pods.go:89] "kube-proxy-hgg2z" [28d3f9c9-4eb8-4c36-81b0-1726a87d20a6] Running
	I1004 01:17:06.344692  151348 system_pods.go:89] "kube-proxy-psqss" [455f6f13-5661-4b4e-847b-9266e44c03d8] Running
	I1004 01:17:06.344696  151348 system_pods.go:89] "kube-proxy-pz9j4" [36f00e2f-5611-43ae-94b5-d9dde6784128] Running
	I1004 01:17:06.344700  151348 system_pods.go:89] "kube-scheduler-multinode-038823" [2da95c67-ae74-41db-a746-455fa043f9a7] Running
	I1004 01:17:06.344704  151348 system_pods.go:89] "storage-provisioner" [b4bd2f00-0b17-47da-add0-486f8232ea80] Running
	I1004 01:17:06.344710  151348 system_pods.go:126] duration metric: took 204.841958ms to wait for k8s-apps to be running ...
	I1004 01:17:06.344722  151348 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:17:06.344765  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:17:06.358356  151348 system_svc.go:56] duration metric: took 13.624695ms WaitForService to wait for kubelet.
	I1004 01:17:06.358378  151348 kubeadm.go:581] duration metric: took 15.227590987s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:17:06.358396  151348 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:17:06.536830  151348 request.go:629] Waited for 178.347651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1004 01:17:06.536903  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:17:06.536911  151348 round_trippers.go:469] Request Headers:
	I1004 01:17:06.536922  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:17:06.536933  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:17:06.539827  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:17:06.539849  151348 round_trippers.go:577] Response Headers:
	I1004 01:17:06.539856  151348 round_trippers.go:580]     Audit-Id: dab0ad58-5e69-41e3-bf0c-420415e716c5
	I1004 01:17:06.539861  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:17:06.539866  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:17:06.539871  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:17:06.539876  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:17:06.539881  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:17:06 GMT
	I1004 01:17:06.540205  151348 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"907"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"873","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15075 chars]
	I1004 01:17:06.540830  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:17:06.540856  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:17:06.540867  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:17:06.540871  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:17:06.540875  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:17:06.540878  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:17:06.540882  151348 node_conditions.go:105] duration metric: took 182.480502ms to run NodePressure ...
	I1004 01:17:06.540894  151348 start.go:228] waiting for startup goroutines ...
	I1004 01:17:06.540902  151348 start.go:233] waiting for cluster config update ...
	I1004 01:17:06.540912  151348 start.go:242] writing updated cluster config ...
	I1004 01:17:06.541361  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:17:06.541456  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:17:06.543937  151348 out.go:177] * Starting worker node multinode-038823-m02 in cluster multinode-038823
	I1004 01:17:06.545227  151348 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:17:06.545253  151348 cache.go:57] Caching tarball of preloaded images
	I1004 01:17:06.545367  151348 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:17:06.545380  151348 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:17:06.545466  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:17:06.545640  151348 start.go:365] acquiring machines lock for multinode-038823-m02: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:17:06.545686  151348 start.go:369] acquired machines lock for "multinode-038823-m02" in 27.218µs
	I1004 01:17:06.545702  151348 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:17:06.545710  151348 fix.go:54] fixHost starting: m02
	I1004 01:17:06.546011  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:17:06.546047  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:17:06.560809  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36719
	I1004 01:17:06.561226  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:17:06.561761  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:17:06.561787  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:17:06.562103  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:17:06.562331  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:17:06.562488  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetState
	I1004 01:17:06.564241  151348 fix.go:102] recreateIfNeeded on multinode-038823-m02: state=Running err=<nil>
	W1004 01:17:06.564258  151348 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:17:06.566139  151348 out.go:177] * Updating the running kvm2 "multinode-038823-m02" VM ...
	I1004 01:17:06.567391  151348 machine.go:88] provisioning docker machine ...
	I1004 01:17:06.567409  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:17:06.567628  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:17:06.567800  151348 buildroot.go:166] provisioning hostname "multinode-038823-m02"
	I1004 01:17:06.567817  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:17:06.567944  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:17:06.570735  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.571162  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:06.571201  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.571374  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:17:06.571572  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:06.571752  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:06.571927  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:17:06.572107  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:17:06.572432  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:17:06.572446  151348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823-m02 && echo "multinode-038823-m02" | sudo tee /etc/hostname
	I1004 01:17:06.701662  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-038823-m02
	
	I1004 01:17:06.701694  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:17:06.704701  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.705078  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:06.705116  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.705340  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:17:06.705557  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:06.705806  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:06.706000  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:17:06.706167  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:17:06.706475  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:17:06.706510  151348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-038823-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-038823-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-038823-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:17:06.831141  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:17:06.831177  151348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:17:06.831205  151348 buildroot.go:174] setting up certificates
	I1004 01:17:06.831216  151348 provision.go:83] configureAuth start
	I1004 01:17:06.831231  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetMachineName
	I1004 01:17:06.831564  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:17:06.834491  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.834891  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:06.834933  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.835142  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:17:06.837391  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.837713  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:06.837736  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:06.837904  151348 provision.go:138] copyHostCerts
	I1004 01:17:06.837936  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:17:06.837966  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:17:06.837974  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:17:06.838034  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:17:06.838102  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:17:06.838119  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:17:06.838126  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:17:06.838148  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:17:06.838190  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:17:06.838206  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:17:06.838213  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:17:06.838234  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:17:06.838281  151348 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.multinode-038823-m02 san=[192.168.39.181 192.168.39.181 localhost 127.0.0.1 minikube multinode-038823-m02]
	I1004 01:17:07.070547  151348 provision.go:172] copyRemoteCerts
	I1004 01:17:07.070653  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:17:07.070678  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:17:07.073200  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:07.073594  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:07.073629  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:07.073821  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:17:07.074068  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:07.074226  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:17:07.074387  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:17:07.159965  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 01:17:07.160037  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:17:07.182853  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 01:17:07.182916  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1004 01:17:07.207115  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 01:17:07.207184  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:17:07.232066  151348 provision.go:86] duration metric: configureAuth took 400.831176ms
	I1004 01:17:07.232098  151348 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:17:07.232319  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:17:07.232400  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:17:07.235394  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:07.235840  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:17:07.235874  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:17:07.236097  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:17:07.236304  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:07.236485  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:17:07.236649  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:17:07.236847  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:17:07.237160  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:17:07.237184  151348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:18:37.845239  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:18:37.845266  151348 machine.go:91] provisioned docker machine in 1m31.277862468s
	I1004 01:18:37.845276  151348 start.go:300] post-start starting for "multinode-038823-m02" (driver="kvm2")
	I1004 01:18:37.845286  151348 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:18:37.845349  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:18:37.845705  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:18:37.845731  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:18:37.848762  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:37.849261  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:37.849287  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:37.849510  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:18:37.849730  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:18:37.849908  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:18:37.850044  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:18:37.935620  151348 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:18:37.939918  151348 command_runner.go:130] > NAME=Buildroot
	I1004 01:18:37.939944  151348 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1004 01:18:37.939951  151348 command_runner.go:130] > ID=buildroot
	I1004 01:18:37.939958  151348 command_runner.go:130] > VERSION_ID=2021.02.12
	I1004 01:18:37.939965  151348 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1004 01:18:37.940038  151348 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:18:37.940065  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:18:37.940143  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:18:37.940236  151348 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:18:37.940251  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 01:18:37.940350  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:18:37.948919  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:18:37.973584  151348 start.go:303] post-start completed in 128.284759ms
	I1004 01:18:37.973614  151348 fix.go:56] fixHost completed within 1m31.427902321s
	I1004 01:18:37.973656  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:18:37.976572  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:37.976957  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:37.977007  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:37.977120  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:18:37.977341  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:18:37.977542  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:18:37.977707  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:18:37.977883  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:18:37.978365  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.181 22 <nil> <nil>}
	I1004 01:18:37.978382  151348 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 01:18:38.086641  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696382318.080319670
	
	I1004 01:18:38.086665  151348 fix.go:206] guest clock: 1696382318.080319670
	I1004 01:18:38.086672  151348 fix.go:219] Guest: 2023-10-04 01:18:38.08031967 +0000 UTC Remote: 2023-10-04 01:18:37.973619319 +0000 UTC m=+453.297714401 (delta=106.700351ms)
	I1004 01:18:38.086689  151348 fix.go:190] guest clock delta is within tolerance: 106.700351ms
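[editor's note] The fix.go guest-clock check above parses the remote "seconds.nanoseconds" timestamp and compares it with the local wall clock, only acting when the skew exceeds a tolerance. A small sketch of that comparison follows; the 1s tolerance is an assumption for illustration, not the value minikube uses.

// Sketch: compare a remote "seconds.nanoseconds" timestamp with local time and
// flag skew above an assumed tolerance.
package clocksketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(remoteOut string, local time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(remoteOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	delta := local.Sub(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	// Assumed policy: anything under 1s of skew is within tolerance.
	if delta > time.Second {
		return delta, fmt.Errorf("guest clock skew %v exceeds tolerance", delta)
	}
	return delta, nil
}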
	I1004 01:18:38.086696  151348 start.go:83] releasing machines lock for "multinode-038823-m02", held for 1m31.540997607s
	I1004 01:18:38.086746  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:18:38.087001  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:18:38.089478  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.089824  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:38.089890  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.091888  151348 out.go:177] * Found network options:
	I1004 01:18:38.093366  151348 out.go:177]   - NO_PROXY=192.168.39.212
	W1004 01:18:38.094734  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:18:38.094780  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:18:38.095363  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:18:38.095566  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:18:38.095663  151348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:18:38.095705  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	W1004 01:18:38.095797  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:18:38.095870  151348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:18:38.095897  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:18:38.098410  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.098763  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.098801  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:38.098828  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.098953  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:18:38.099159  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:18:38.099216  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:38.099248  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:38.099327  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:18:38.099405  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:18:38.099486  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:18:38.099595  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:18:38.099732  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:18:38.099871  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:18:38.329919  151348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 01:18:38.329917  151348 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 01:18:38.336356  151348 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 01:18:38.336399  151348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:18:38.336451  151348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:18:38.347094  151348 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 01:18:38.347122  151348 start.go:469] detecting cgroup driver to use...
	I1004 01:18:38.347190  151348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:18:38.362050  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:18:38.375400  151348 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:18:38.375471  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:18:38.388958  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:18:38.403916  151348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:18:38.549729  151348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:18:38.687823  151348 docker.go:213] disabling docker service ...
	I1004 01:18:38.687898  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:18:38.704747  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:18:38.718532  151348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:18:38.876320  151348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:18:39.008601  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:18:39.021097  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:18:39.040983  151348 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 01:18:39.041039  151348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:18:39.041100  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:18:39.051688  151348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:18:39.051754  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:18:39.061771  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:18:39.071955  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
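[editor's note] The sed runs above pin the pause image to registry.k8s.io/pause:3.9, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A hedged sketch of the same line-rewriting done in Go with regexp follows; the file path and values come from the log, the helper itself is illustrative.

// Sketch: rewrite pause_image / cgroup_manager lines in a CRI-O drop-in config,
// mirroring the sed commands in the log above.
package criosketch

import (
	"os"
	"regexp"
)

func pinCrioConfig(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "`+pauseImage+`"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "`+cgroupManager+`"`)
	return os.WriteFile(path, []byte(conf), 0o644)
}

// Example: pinCrioConfig("/etc/crio/crio.conf.d/02-crio.conf",
//          "registry.k8s.io/pause:3.9", "cgroupfs")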
	I1004 01:18:39.081212  151348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:18:39.091112  151348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:18:39.099542  151348 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1004 01:18:39.099647  151348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:18:39.108189  151348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:18:39.221622  151348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:18:39.476706  151348 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:18:39.476771  151348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:18:39.482324  151348 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 01:18:39.482352  151348 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 01:18:39.482362  151348 command_runner.go:130] > Device: 16h/22d	Inode: 1195        Links: 1
	I1004 01:18:39.482374  151348 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:18:39.482383  151348 command_runner.go:130] > Access: 2023-10-04 01:18:39.390649454 +0000
	I1004 01:18:39.482392  151348 command_runner.go:130] > Modify: 2023-10-04 01:18:39.390649454 +0000
	I1004 01:18:39.482404  151348 command_runner.go:130] > Change: 2023-10-04 01:18:39.390649454 +0000
	I1004 01:18:39.482414  151348 command_runner.go:130] >  Birth: -
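[editor's note] start.go:516 above gives the restarted runtime up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A minimal polling sketch of that wait follows; the 500ms interval and error text are assumptions, not minikube's exact behaviour.

// Sketch: wait for a unix socket path to show up, with an overall deadline.
package socksketch

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
}

// Example: waitForSocket("/var/run/crio/crio.sock", 60*time.Second)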
	I1004 01:18:39.482432  151348 start.go:537] Will wait 60s for crictl version
	I1004 01:18:39.482485  151348 ssh_runner.go:195] Run: which crictl
	I1004 01:18:39.486866  151348 command_runner.go:130] > /usr/bin/crictl
	I1004 01:18:39.486923  151348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:18:39.530568  151348 command_runner.go:130] > Version:  0.1.0
	I1004 01:18:39.530600  151348 command_runner.go:130] > RuntimeName:  cri-o
	I1004 01:18:39.530609  151348 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1004 01:18:39.530634  151348 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 01:18:39.531958  151348 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:18:39.532022  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:18:39.579286  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:18:39.579319  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:18:39.579329  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:18:39.579335  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:18:39.579347  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:18:39.579363  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:18:39.579378  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:18:39.579386  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:18:39.579398  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:18:39.579413  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:18:39.579419  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:18:39.579424  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:18:39.580763  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:18:39.642778  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:18:39.642804  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:18:39.642816  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:18:39.642823  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:18:39.642837  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:18:39.642845  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:18:39.642852  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:18:39.642859  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:18:39.642876  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:18:39.642890  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:18:39.642900  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:18:39.642906  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:18:39.646294  151348 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:18:39.647798  151348 out.go:177]   - env NO_PROXY=192.168.39.212
	I1004 01:18:39.649349  151348 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:18:39.651864  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:39.652263  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:18:39.652308  151348 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:18:39.652569  151348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:18:39.658106  151348 command_runner.go:130] > 192.168.39.1	host.minikube.internal
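[editor's note] The grep above checks whether /etc/hosts already maps 192.168.39.1 to host.minikube.internal; the entry is only appended when the check misses. A sketch of that idempotent /etc/hosts update follows; the helper name is hypothetical.

// Sketch: append "IP<TAB>name" to /etc/hosts only if an identical entry is absent.
package hostssketch

import (
	"os"
	"strings"
)

func ensureHostsEntry(hostsPath, ip, name string) error {
	entry := ip + "\t" + name
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.TrimSpace(line) == entry {
			return nil // already present, nothing to do
		}
	}
	f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(entry + "\n")
	return err
}

// Example: ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")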
	I1004 01:18:39.658300  151348 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823 for IP: 192.168.39.181
	I1004 01:18:39.658328  151348 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:18:39.658516  151348 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:18:39.658585  151348 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:18:39.658606  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 01:18:39.658623  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 01:18:39.658636  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 01:18:39.658648  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 01:18:39.658705  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:18:39.658736  151348 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:18:39.658751  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:18:39.658774  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:18:39.658802  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:18:39.658827  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:18:39.658865  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:18:39.658894  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:18:39.658909  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 01:18:39.658921  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 01:18:39.659275  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:18:39.685180  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:18:39.710670  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:18:39.737852  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:18:39.768443  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:18:39.791459  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:18:39.816785  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:18:39.841956  151348 ssh_runner.go:195] Run: openssl version
	I1004 01:18:39.848026  151348 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1004 01:18:39.848090  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:18:39.859891  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:18:39.864918  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:18:39.865249  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:18:39.865298  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:18:39.870704  151348 command_runner.go:130] > 51391683
	I1004 01:18:39.870948  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:18:39.881529  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:18:39.891903  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:18:39.896451  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:18:39.896564  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:18:39.896616  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:18:39.901952  151348 command_runner.go:130] > 3ec20f2e
	I1004 01:18:39.902231  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:18:39.911506  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:18:39.922180  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:18:39.926957  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:18:39.927024  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:18:39.927090  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:18:39.932820  151348 command_runner.go:130] > b5213941
	I1004 01:18:39.932897  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
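[editor's note] The openssl/ln sequence above computes each certificate's OpenSSL subject hash (51391683, 3ec20f2e, b5213941) and links /etc/ssl/certs/<hash>.0 to the PEM so the system trust lookup finds it by hash. A sketch that shells out to openssl for the hash and creates the symlink follows; the paths come from the log, the wrapper is illustrative.

// Sketch: create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA certificate,
// equivalent to `openssl x509 -hash -noout -in CERT` followed by `ln -fs`.
package hashsketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return link, os.Symlink(certPath, link)
}

// Example: linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")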
	I1004 01:18:39.942185  151348 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:18:39.946337  151348 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:18:39.946523  151348 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:18:39.946622  151348 ssh_runner.go:195] Run: crio config
	I1004 01:18:39.996474  151348 command_runner.go:130] ! time="2023-10-04 01:18:39.990059208Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1004 01:18:39.996520  151348 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1004 01:18:40.001326  151348 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 01:18:40.001354  151348 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 01:18:40.001364  151348 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 01:18:40.001369  151348 command_runner.go:130] > #
	I1004 01:18:40.001379  151348 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 01:18:40.001390  151348 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 01:18:40.001413  151348 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 01:18:40.001435  151348 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 01:18:40.001445  151348 command_runner.go:130] > # reload'.
	I1004 01:18:40.001456  151348 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 01:18:40.001468  151348 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 01:18:40.001479  151348 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 01:18:40.001488  151348 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 01:18:40.001496  151348 command_runner.go:130] > [crio]
	I1004 01:18:40.001507  151348 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 01:18:40.001519  151348 command_runner.go:130] > # containers images, in this directory.
	I1004 01:18:40.001527  151348 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 01:18:40.001544  151348 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 01:18:40.001554  151348 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 01:18:40.001560  151348 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 01:18:40.001567  151348 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 01:18:40.001574  151348 command_runner.go:130] > storage_driver = "overlay"
	I1004 01:18:40.001583  151348 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 01:18:40.001598  151348 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 01:18:40.001609  151348 command_runner.go:130] > storage_option = [
	I1004 01:18:40.001617  151348 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 01:18:40.001626  151348 command_runner.go:130] > ]
	I1004 01:18:40.001636  151348 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 01:18:40.001649  151348 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 01:18:40.001657  151348 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 01:18:40.001665  151348 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 01:18:40.001679  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 01:18:40.001691  151348 command_runner.go:130] > # always happen on a node reboot
	I1004 01:18:40.001700  151348 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 01:18:40.001712  151348 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 01:18:40.001725  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 01:18:40.001739  151348 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 01:18:40.001748  151348 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1004 01:18:40.001761  151348 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 01:18:40.001778  151348 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 01:18:40.001789  151348 command_runner.go:130] > # internal_wipe = true
	I1004 01:18:40.001801  151348 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 01:18:40.001814  151348 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 01:18:40.001828  151348 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 01:18:40.001851  151348 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 01:18:40.001865  151348 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 01:18:40.001873  151348 command_runner.go:130] > [crio.api]
	I1004 01:18:40.001883  151348 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 01:18:40.001894  151348 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 01:18:40.001904  151348 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 01:18:40.001912  151348 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 01:18:40.001920  151348 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 01:18:40.001932  151348 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 01:18:40.001944  151348 command_runner.go:130] > # stream_port = "0"
	I1004 01:18:40.001957  151348 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 01:18:40.001967  151348 command_runner.go:130] > # stream_enable_tls = false
	I1004 01:18:40.001980  151348 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 01:18:40.001990  151348 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 01:18:40.002001  151348 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 01:18:40.002009  151348 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 01:18:40.002019  151348 command_runner.go:130] > # minutes.
	I1004 01:18:40.002028  151348 command_runner.go:130] > # stream_tls_cert = ""
	I1004 01:18:40.002041  151348 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 01:18:40.002053  151348 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 01:18:40.002064  151348 command_runner.go:130] > # stream_tls_key = ""
	I1004 01:18:40.002074  151348 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 01:18:40.002085  151348 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 01:18:40.002091  151348 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 01:18:40.002100  151348 command_runner.go:130] > # stream_tls_ca = ""
	I1004 01:18:40.002113  151348 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:18:40.002125  151348 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 01:18:40.002137  151348 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:18:40.002148  151348 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 01:18:40.002172  151348 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 01:18:40.002194  151348 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 01:18:40.002201  151348 command_runner.go:130] > [crio.runtime]
	I1004 01:18:40.002215  151348 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 01:18:40.002228  151348 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 01:18:40.002238  151348 command_runner.go:130] > # "nofile=1024:2048"
	I1004 01:18:40.002252  151348 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 01:18:40.002260  151348 command_runner.go:130] > # default_ulimits = [
	I1004 01:18:40.002264  151348 command_runner.go:130] > # ]
	I1004 01:18:40.002282  151348 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 01:18:40.002293  151348 command_runner.go:130] > # no_pivot = false
	I1004 01:18:40.002304  151348 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 01:18:40.002318  151348 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 01:18:40.002329  151348 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 01:18:40.002341  151348 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 01:18:40.002349  151348 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 01:18:40.002359  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:18:40.002371  151348 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 01:18:40.002382  151348 command_runner.go:130] > # Cgroup setting for conmon
	I1004 01:18:40.002394  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 01:18:40.002404  151348 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 01:18:40.002418  151348 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 01:18:40.002429  151348 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 01:18:40.002439  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:18:40.002450  151348 command_runner.go:130] > conmon_env = [
	I1004 01:18:40.002464  151348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 01:18:40.002473  151348 command_runner.go:130] > ]
	I1004 01:18:40.002482  151348 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 01:18:40.002493  151348 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 01:18:40.002506  151348 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 01:18:40.002514  151348 command_runner.go:130] > # default_env = [
	I1004 01:18:40.002518  151348 command_runner.go:130] > # ]
	I1004 01:18:40.002529  151348 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 01:18:40.002539  151348 command_runner.go:130] > # selinux = false
	I1004 01:18:40.002553  151348 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 01:18:40.002564  151348 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 01:18:40.002573  151348 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 01:18:40.002580  151348 command_runner.go:130] > # seccomp_profile = ""
	I1004 01:18:40.002593  151348 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 01:18:40.002603  151348 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 01:18:40.002613  151348 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 01:18:40.002624  151348 command_runner.go:130] > # which might increase security.
	I1004 01:18:40.002636  151348 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 01:18:40.002647  151348 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 01:18:40.002661  151348 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 01:18:40.002672  151348 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 01:18:40.002685  151348 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 01:18:40.002694  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:18:40.002701  151348 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 01:18:40.002714  151348 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 01:18:40.002725  151348 command_runner.go:130] > # the cgroup blockio controller.
	I1004 01:18:40.002735  151348 command_runner.go:130] > # blockio_config_file = ""
	I1004 01:18:40.002749  151348 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 01:18:40.002759  151348 command_runner.go:130] > # irqbalance daemon.
	I1004 01:18:40.002771  151348 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 01:18:40.002780  151348 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 01:18:40.002792  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:18:40.002802  151348 command_runner.go:130] > # rdt_config_file = ""
	I1004 01:18:40.002815  151348 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 01:18:40.002826  151348 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 01:18:40.002839  151348 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 01:18:40.002850  151348 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 01:18:40.002861  151348 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 01:18:40.002872  151348 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 01:18:40.002883  151348 command_runner.go:130] > # will be added.
	I1004 01:18:40.002894  151348 command_runner.go:130] > # default_capabilities = [
	I1004 01:18:40.002904  151348 command_runner.go:130] > # 	"CHOWN",
	I1004 01:18:40.002913  151348 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 01:18:40.002923  151348 command_runner.go:130] > # 	"FSETID",
	I1004 01:18:40.002929  151348 command_runner.go:130] > # 	"FOWNER",
	I1004 01:18:40.002939  151348 command_runner.go:130] > # 	"SETGID",
	I1004 01:18:40.002946  151348 command_runner.go:130] > # 	"SETUID",
	I1004 01:18:40.002951  151348 command_runner.go:130] > # 	"SETPCAP",
	I1004 01:18:40.002959  151348 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 01:18:40.002968  151348 command_runner.go:130] > # 	"KILL",
	I1004 01:18:40.002978  151348 command_runner.go:130] > # ]
	I1004 01:18:40.002992  151348 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 01:18:40.003005  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:18:40.003016  151348 command_runner.go:130] > # default_sysctls = [
	I1004 01:18:40.003022  151348 command_runner.go:130] > # ]
	I1004 01:18:40.003030  151348 command_runner.go:130] > # List of devices on the host that a
	I1004 01:18:40.003037  151348 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 01:18:40.003043  151348 command_runner.go:130] > # allowed_devices = [
	I1004 01:18:40.003050  151348 command_runner.go:130] > # 	"/dev/fuse",
	I1004 01:18:40.003059  151348 command_runner.go:130] > # ]
	I1004 01:18:40.003068  151348 command_runner.go:130] > # List of additional devices. specified as
	I1004 01:18:40.003083  151348 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 01:18:40.003094  151348 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 01:18:40.003117  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:18:40.003124  151348 command_runner.go:130] > # additional_devices = [
	I1004 01:18:40.003129  151348 command_runner.go:130] > # ]
	I1004 01:18:40.003141  151348 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 01:18:40.003152  151348 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 01:18:40.003162  151348 command_runner.go:130] > # 	"/etc/cdi",
	I1004 01:18:40.003169  151348 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 01:18:40.003178  151348 command_runner.go:130] > # ]
	I1004 01:18:40.003189  151348 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 01:18:40.003201  151348 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 01:18:40.003206  151348 command_runner.go:130] > # Defaults to false.
	I1004 01:18:40.003211  151348 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 01:18:40.003225  151348 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 01:18:40.003239  151348 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 01:18:40.003249  151348 command_runner.go:130] > # hooks_dir = [
	I1004 01:18:40.003258  151348 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 01:18:40.003273  151348 command_runner.go:130] > # ]
	I1004 01:18:40.003286  151348 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 01:18:40.003294  151348 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 01:18:40.003303  151348 command_runner.go:130] > # its default mounts from the following two files:
	I1004 01:18:40.003309  151348 command_runner.go:130] > #
	I1004 01:18:40.003324  151348 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 01:18:40.003338  151348 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 01:18:40.003351  151348 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 01:18:40.003360  151348 command_runner.go:130] > #
	I1004 01:18:40.003374  151348 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 01:18:40.003384  151348 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 01:18:40.003397  151348 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 01:18:40.003409  151348 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 01:18:40.003418  151348 command_runner.go:130] > #
	I1004 01:18:40.003428  151348 command_runner.go:130] > # default_mounts_file = ""
	I1004 01:18:40.003439  151348 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 01:18:40.003454  151348 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 01:18:40.003463  151348 command_runner.go:130] > pids_limit = 1024
	I1004 01:18:40.003470  151348 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1004 01:18:40.003482  151348 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 01:18:40.003497  151348 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 01:18:40.003513  151348 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 01:18:40.003524  151348 command_runner.go:130] > # log_size_max = -1
	I1004 01:18:40.003538  151348 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1004 01:18:40.003547  151348 command_runner.go:130] > # log_to_journald = false
	I1004 01:18:40.003556  151348 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 01:18:40.003564  151348 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 01:18:40.003573  151348 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 01:18:40.003586  151348 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 01:18:40.003599  151348 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 01:18:40.003610  151348 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 01:18:40.003622  151348 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 01:18:40.003632  151348 command_runner.go:130] > # read_only = false
	I1004 01:18:40.003642  151348 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 01:18:40.003650  151348 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 01:18:40.003661  151348 command_runner.go:130] > # live configuration reload.
	I1004 01:18:40.003675  151348 command_runner.go:130] > # log_level = "info"
	I1004 01:18:40.003688  151348 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 01:18:40.003700  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:18:40.003710  151348 command_runner.go:130] > # log_filter = ""
	I1004 01:18:40.003723  151348 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 01:18:40.003731  151348 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 01:18:40.003738  151348 command_runner.go:130] > # separated by comma.
	I1004 01:18:40.003748  151348 command_runner.go:130] > # uid_mappings = ""
	I1004 01:18:40.003762  151348 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 01:18:40.003776  151348 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 01:18:40.003786  151348 command_runner.go:130] > # separated by comma.
	I1004 01:18:40.003797  151348 command_runner.go:130] > # gid_mappings = ""
	I1004 01:18:40.003810  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 01:18:40.003819  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:18:40.003829  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:18:40.003840  151348 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 01:18:40.003855  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 01:18:40.003868  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:18:40.003880  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:18:40.003890  151348 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 01:18:40.003900  151348 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 01:18:40.003912  151348 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 01:18:40.003922  151348 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 01:18:40.003933  151348 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 01:18:40.003943  151348 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 01:18:40.003956  151348 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 01:18:40.003967  151348 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 01:18:40.003976  151348 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 01:18:40.003985  151348 command_runner.go:130] > drop_infra_ctr = false
	I1004 01:18:40.003992  151348 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 01:18:40.004002  151348 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 01:18:40.004015  151348 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 01:18:40.004026  151348 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 01:18:40.004036  151348 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 01:18:40.004048  151348 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 01:18:40.004055  151348 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 01:18:40.004069  151348 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 01:18:40.004074  151348 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 01:18:40.004083  151348 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 01:18:40.004096  151348 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1004 01:18:40.004109  151348 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1004 01:18:40.004120  151348 command_runner.go:130] > # default_runtime = "runc"
	I1004 01:18:40.004128  151348 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 01:18:40.004143  151348 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 01:18:40.004158  151348 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 01:18:40.004167  151348 command_runner.go:130] > # creation as a file is not desired either.
	I1004 01:18:40.004183  151348 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 01:18:40.004195  151348 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 01:18:40.004203  151348 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 01:18:40.004212  151348 command_runner.go:130] > # ]
	I1004 01:18:40.004222  151348 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 01:18:40.004236  151348 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 01:18:40.004246  151348 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1004 01:18:40.004255  151348 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1004 01:18:40.004264  151348 command_runner.go:130] > #
	I1004 01:18:40.004277  151348 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1004 01:18:40.004289  151348 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1004 01:18:40.004299  151348 command_runner.go:130] > #  runtime_type = "oci"
	I1004 01:18:40.004309  151348 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1004 01:18:40.004318  151348 command_runner.go:130] > #  privileged_without_host_devices = false
	I1004 01:18:40.004327  151348 command_runner.go:130] > #  allowed_annotations = []
	I1004 01:18:40.004332  151348 command_runner.go:130] > # Where:
	I1004 01:18:40.004340  151348 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1004 01:18:40.004353  151348 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1004 01:18:40.004368  151348 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 01:18:40.004382  151348 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 01:18:40.004391  151348 command_runner.go:130] > #   in $PATH.
	I1004 01:18:40.004402  151348 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1004 01:18:40.004413  151348 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 01:18:40.004420  151348 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1004 01:18:40.004430  151348 command_runner.go:130] > #   state.
	I1004 01:18:40.004441  151348 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 01:18:40.004455  151348 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1004 01:18:40.004469  151348 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 01:18:40.004481  151348 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 01:18:40.004494  151348 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 01:18:40.004503  151348 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 01:18:40.004511  151348 command_runner.go:130] > #   The currently recognized values are:
	I1004 01:18:40.004522  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 01:18:40.004538  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 01:18:40.004551  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 01:18:40.004565  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 01:18:40.004581  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 01:18:40.004592  151348 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 01:18:40.004602  151348 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 01:18:40.004616  151348 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1004 01:18:40.004629  151348 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 01:18:40.004640  151348 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 01:18:40.004651  151348 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 01:18:40.004661  151348 command_runner.go:130] > runtime_type = "oci"
	I1004 01:18:40.004671  151348 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 01:18:40.004679  151348 command_runner.go:130] > runtime_config_path = ""
	I1004 01:18:40.004683  151348 command_runner.go:130] > monitor_path = ""
	I1004 01:18:40.004692  151348 command_runner.go:130] > monitor_cgroup = ""
	I1004 01:18:40.004704  151348 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 01:18:40.004718  151348 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1004 01:18:40.004728  151348 command_runner.go:130] > # running containers
	I1004 01:18:40.004739  151348 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1004 01:18:40.004752  151348 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1004 01:18:40.004822  151348 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1004 01:18:40.004844  151348 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1004 01:18:40.004850  151348 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1004 01:18:40.004857  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1004 01:18:40.004864  151348 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1004 01:18:40.004872  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1004 01:18:40.004880  151348 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1004 01:18:40.004902  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1004 01:18:40.004919  151348 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 01:18:40.004930  151348 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 01:18:40.004941  151348 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 01:18:40.004956  151348 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1004 01:18:40.004973  151348 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 01:18:40.004986  151348 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 01:18:40.005004  151348 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 01:18:40.005022  151348 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 01:18:40.005031  151348 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 01:18:40.005044  151348 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 01:18:40.005054  151348 command_runner.go:130] > # Example:
	I1004 01:18:40.005065  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 01:18:40.005077  151348 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 01:18:40.005086  151348 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 01:18:40.005098  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 01:18:40.005107  151348 command_runner.go:130] > # cpuset = 0
	I1004 01:18:40.005112  151348 command_runner.go:130] > # cpushares = "0-1"
	I1004 01:18:40.005120  151348 command_runner.go:130] > # Where:
	I1004 01:18:40.005128  151348 command_runner.go:130] > # The workload name is workload-type.
	I1004 01:18:40.005144  151348 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 01:18:40.005157  151348 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 01:18:40.005169  151348 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 01:18:40.005185  151348 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 01:18:40.005196  151348 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 01:18:40.005202  151348 command_runner.go:130] > # 
	I1004 01:18:40.005213  151348 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 01:18:40.005223  151348 command_runner.go:130] > #
	I1004 01:18:40.005234  151348 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 01:18:40.005248  151348 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 01:18:40.005261  151348 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 01:18:40.005280  151348 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 01:18:40.005289  151348 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 01:18:40.005298  151348 command_runner.go:130] > [crio.image]
	I1004 01:18:40.005309  151348 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 01:18:40.005321  151348 command_runner.go:130] > # default_transport = "docker://"
	I1004 01:18:40.005334  151348 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 01:18:40.005348  151348 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:18:40.005358  151348 command_runner.go:130] > # global_auth_file = ""
	I1004 01:18:40.005368  151348 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 01:18:40.005378  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:18:40.005389  151348 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1004 01:18:40.005404  151348 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 01:18:40.005418  151348 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:18:40.005429  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:18:40.005439  151348 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 01:18:40.005452  151348 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 01:18:40.005461  151348 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 01:18:40.005477  151348 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 01:18:40.005492  151348 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 01:18:40.005502  151348 command_runner.go:130] > # pause_command = "/pause"
	I1004 01:18:40.005515  151348 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 01:18:40.005529  151348 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 01:18:40.005541  151348 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 01:18:40.005551  151348 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 01:18:40.005564  151348 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 01:18:40.005575  151348 command_runner.go:130] > # signature_policy = ""
	I1004 01:18:40.005585  151348 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 01:18:40.005595  151348 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 01:18:40.005605  151348 command_runner.go:130] > # changing them here.
	I1004 01:18:40.005615  151348 command_runner.go:130] > # insecure_registries = [
	I1004 01:18:40.005623  151348 command_runner.go:130] > # ]
	I1004 01:18:40.005635  151348 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 01:18:40.005642  151348 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 01:18:40.005647  151348 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 01:18:40.005654  151348 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 01:18:40.005665  151348 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 01:18:40.005680  151348 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1004 01:18:40.005690  151348 command_runner.go:130] > # CNI plugins.
	I1004 01:18:40.005700  151348 command_runner.go:130] > [crio.network]
	I1004 01:18:40.005713  151348 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 01:18:40.005725  151348 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 01:18:40.005734  151348 command_runner.go:130] > # cni_default_network = ""
	I1004 01:18:40.005742  151348 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 01:18:40.005749  151348 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 01:18:40.005755  151348 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 01:18:40.005761  151348 command_runner.go:130] > # plugin_dirs = [
	I1004 01:18:40.005765  151348 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 01:18:40.005769  151348 command_runner.go:130] > # ]
	I1004 01:18:40.005777  151348 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 01:18:40.005784  151348 command_runner.go:130] > [crio.metrics]
	I1004 01:18:40.005789  151348 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 01:18:40.005796  151348 command_runner.go:130] > enable_metrics = true
	I1004 01:18:40.005801  151348 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 01:18:40.005813  151348 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 01:18:40.005828  151348 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1004 01:18:40.005855  151348 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 01:18:40.005868  151348 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 01:18:40.005876  151348 command_runner.go:130] > # metrics_collectors = [
	I1004 01:18:40.005886  151348 command_runner.go:130] > # 	"operations",
	I1004 01:18:40.005895  151348 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 01:18:40.005906  151348 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 01:18:40.005914  151348 command_runner.go:130] > # 	"operations_errors",
	I1004 01:18:40.005919  151348 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 01:18:40.005924  151348 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 01:18:40.005929  151348 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 01:18:40.005936  151348 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 01:18:40.005940  151348 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 01:18:40.005944  151348 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 01:18:40.005951  151348 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 01:18:40.005955  151348 command_runner.go:130] > # 	"containers_oom_total",
	I1004 01:18:40.005959  151348 command_runner.go:130] > # 	"containers_oom",
	I1004 01:18:40.005963  151348 command_runner.go:130] > # 	"processes_defunct",
	I1004 01:18:40.005969  151348 command_runner.go:130] > # 	"operations_total",
	I1004 01:18:40.005974  151348 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 01:18:40.005985  151348 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 01:18:40.005989  151348 command_runner.go:130] > # 	"operations_errors_total",
	I1004 01:18:40.005994  151348 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 01:18:40.005998  151348 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 01:18:40.006003  151348 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 01:18:40.006008  151348 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 01:18:40.006012  151348 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 01:18:40.006017  151348 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 01:18:40.006021  151348 command_runner.go:130] > # ]
	I1004 01:18:40.006026  151348 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 01:18:40.006032  151348 command_runner.go:130] > # metrics_port = 9090
	I1004 01:18:40.006044  151348 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 01:18:40.006051  151348 command_runner.go:130] > # metrics_socket = ""
	I1004 01:18:40.006063  151348 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 01:18:40.006074  151348 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 01:18:40.006088  151348 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 01:18:40.006099  151348 command_runner.go:130] > # certificate on any modification event.
	I1004 01:18:40.006106  151348 command_runner.go:130] > # metrics_cert = ""
	I1004 01:18:40.006113  151348 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 01:18:40.006118  151348 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 01:18:40.006122  151348 command_runner.go:130] > # metrics_key = ""
	I1004 01:18:40.006127  151348 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 01:18:40.006132  151348 command_runner.go:130] > [crio.tracing]
	I1004 01:18:40.006137  151348 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 01:18:40.006144  151348 command_runner.go:130] > # enable_tracing = false
	I1004 01:18:40.006149  151348 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 01:18:40.006155  151348 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 01:18:40.006160  151348 command_runner.go:130] > # Number of samples to collect per million spans.
	I1004 01:18:40.006167  151348 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 01:18:40.006173  151348 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 01:18:40.006179  151348 command_runner.go:130] > [crio.stats]
	I1004 01:18:40.006185  151348 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 01:18:40.006193  151348 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 01:18:40.006198  151348 command_runner.go:130] > # stats_collection_period = 0
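	The dump above is CRI-O's commented configuration as captured from the node. For readers who want to reproduce an equivalent dump by hand, a minimal sketch follows; it assumes the profile name from this run and CRI-O's `crio config` subcommand, which prints the commented configuration (the exact command minikube used to capture this text is not shown in the log):
	
	# Print CRI-O's commented configuration from the node (equivalent to the dump above)
	minikube ssh -p multinode-038823 "sudo crio config"
	# Inspect the on-disk file CRI-O loads overrides from (usual CRI-O location)
	minikube ssh -p multinode-038823 "sudo cat /etc/crio/crio.conf"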
	I1004 01:18:40.006262  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:18:40.006284  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:18:40.006294  151348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:18:40.006313  151348 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.181 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-038823 NodeName:multinode-038823-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:18:40.006459  151348 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-038823-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.181
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:18:40.006512  151348 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-038823-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
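	The kubeadm config and kubelet unit rendered above are what minikube writes for the joining worker. The copy the control plane actually serves can be compared against it with the command the kubeadm preflight output suggests further down in this log (standard kubectl; context name taken from this run):
	
	kubectl --context multinode-038823 -n kube-system get cm kubeadm-config -o yaml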
	I1004 01:18:40.006565  151348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:18:40.016749  151348 command_runner.go:130] > kubeadm
	I1004 01:18:40.016776  151348 command_runner.go:130] > kubectl
	I1004 01:18:40.016783  151348 command_runner.go:130] > kubelet
	I1004 01:18:40.016858  151348 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:18:40.016927  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1004 01:18:40.026447  151348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1004 01:18:40.044097  151348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:18:40.061172  151348 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1004 01:18:40.065036  151348 command_runner.go:130] > 192.168.39.212	control-plane.minikube.internal
	I1004 01:18:40.065146  151348 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:18:40.065414  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:18:40.065598  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:18:40.065663  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:18:40.081401  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38079
	I1004 01:18:40.081876  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:18:40.082302  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:18:40.082321  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:18:40.082668  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:18:40.082811  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:18:40.082950  151348 start.go:304] JoinCluster: &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:18:40.083062  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 01:18:40.083078  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:18:40.085654  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:18:40.086074  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:18:40.086106  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:18:40.086249  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:18:40.086436  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:18:40.086565  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:18:40.086676  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:18:40.277420  151348 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token fo0d3e.sg4oehoimw1edtw2 --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:18:40.277477  151348 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:18:40.277518  151348 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:18:40.277956  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:18:40.277997  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:18:40.293766  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I1004 01:18:40.294222  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:18:40.294672  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:18:40.294694  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:18:40.294986  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:18:40.295175  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:18:40.295366  151348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-038823-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1004 01:18:40.295395  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:18:40.297793  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:18:40.298215  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:18:40.298248  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:18:40.298419  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:18:40.298580  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:18:40.298735  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:18:40.298883  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:18:40.468667  151348 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1004 01:18:40.527605  151348 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-cqczw, kube-system/kube-proxy-hgg2z
	I1004 01:18:43.549516  151348 command_runner.go:130] > node/multinode-038823-m02 cordoned
	I1004 01:18:43.549542  151348 command_runner.go:130] > pod "busybox-5bc68d56bd-8g74z" has DeletionTimestamp older than 1 seconds, skipping
	I1004 01:18:43.549548  151348 command_runner.go:130] > node/multinode-038823-m02 drained
	I1004 01:18:43.549569  151348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-038823-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.254179246s)
	I1004 01:18:43.549585  151348 node.go:108] successfully drained node "m02"
	I1004 01:18:43.550108  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:18:43.550448  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:18:43.551028  151348 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1004 01:18:43.551097  151348 round_trippers.go:463] DELETE https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:43.551109  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:43.551120  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:43.551131  151348 round_trippers.go:473]     Content-Type: application/json
	I1004 01:18:43.551140  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:43.567026  151348 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I1004 01:18:43.567045  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:43.567052  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:43.567057  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:43.567062  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:43.567067  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:43.567075  151348 round_trippers.go:580]     Content-Length: 171
	I1004 01:18:43.567083  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:43 GMT
	I1004 01:18:43.567094  151348 round_trippers.go:580]     Audit-Id: 4fcae932-1cd6-4f59-be2e-bcc5ed081b0e
	I1004 01:18:43.567389  151348 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-038823-m02","kind":"nodes","uid":"8f261d3e-ecc3-48fd-b5ac-a323a230eaef"}}
	I1004 01:18:43.567445  151348 node.go:124] successfully deleted node "m02"
	I1004 01:18:43.567457  151348 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:18:43.567485  151348 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:18:43.567511  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token fo0d3e.sg4oehoimw1edtw2 --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-038823-m02"
	I1004 01:18:43.622956  151348 command_runner.go:130] ! W1004 01:18:43.616554    2603 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1004 01:18:43.623415  151348 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1004 01:18:43.776927  151348 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1004 01:18:43.776966  151348 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1004 01:18:44.542769  151348 command_runner.go:130] > [preflight] Running pre-flight checks
	I1004 01:18:44.542801  151348 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1004 01:18:44.542814  151348 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1004 01:18:44.542826  151348 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:18:44.542837  151348 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:18:44.542844  151348 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1004 01:18:44.542854  151348 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1004 01:18:44.542868  151348 command_runner.go:130] > This node has joined the cluster:
	I1004 01:18:44.542880  151348 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1004 01:18:44.542891  151348 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1004 01:18:44.542906  151348 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1004 01:18:44.542940  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 01:18:44.815931  151348 start.go:306] JoinCluster complete in 4.732974245s
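	Condensed, the rejoin flow logged above is: mint a join command on the control plane, drain and remove the stale node object (done here through the API; shown below as the equivalent kubectl calls), then run kubeadm join on the worker. A hedged sketch with the names from this run; the token and CA-cert hash are placeholders, since real values are minted per run, and kubeadm/kubectl are assumed to be on PATH on the respective nodes:
	
	# Control plane: mint a join command (prints token + discovery hash)
	sudo kubeadm token create --print-join-command --ttl=0
	# Control plane: remove the stale worker before it rejoins
	kubectl drain multinode-038823-m02 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
	kubectl delete node multinode-038823-m02
	# Worker: join with the freshly minted values (placeholders below)
	sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	  --discovery-token-ca-cert-hash sha256:<hash> \
	  --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock \
	  --node-name=multinode-038823-m02
	
	Using the unix:// scheme on --cri-socket avoids the deprecation warning kubeadm prints in the join output below.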
	I1004 01:18:44.815964  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:18:44.815971  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:18:44.816033  151348 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 01:18:44.821866  151348 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1004 01:18:44.821888  151348 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1004 01:18:44.821895  151348 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1004 01:18:44.821901  151348 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:18:44.821919  151348 command_runner.go:130] > Access: 2023-10-04 01:16:15.620741754 +0000
	I1004 01:18:44.821924  151348 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1004 01:18:44.821934  151348 command_runner.go:130] > Change: 2023-10-04 01:16:13.762741754 +0000
	I1004 01:18:44.821939  151348 command_runner.go:130] >  Birth: -
	I1004 01:18:44.822217  151348 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 01:18:44.822238  151348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 01:18:44.840803  151348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 01:18:45.121830  151348 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:18:45.121881  151348 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:18:45.121887  151348 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1004 01:18:45.121893  151348 command_runner.go:130] > daemonset.apps/kindnet configured
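	The re-applied CNI manifest reports the kindnet objects as unchanged/configured. A quick check that the daemonset is actually running on all three nodes (standard kubectl calls; the app=kindnet label is the one kindnet's manifest is assumed to use):
	
	kubectl --context multinode-038823 -n kube-system rollout status daemonset/kindnet
	kubectl --context multinode-038823 -n kube-system get pods -l app=kindnet -o wide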
	I1004 01:18:45.122297  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:18:45.122504  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:18:45.122781  151348 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:18:45.122792  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.122800  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.122805  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.128931  151348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 01:18:45.128953  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.128960  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.128969  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.128977  151348 round_trippers.go:580]     Content-Length: 291
	I1004 01:18:45.128986  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.128993  151348 round_trippers.go:580]     Audit-Id: 5a5a661e-8737-4606-b321-023d7656e1e0
	I1004 01:18:45.129001  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.129008  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.129046  151348 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"901","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1004 01:18:45.129179  151348 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-038823" context rescaled to 1 replicas
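	The rescale above is performed through the deployments/coredns scale subresource. From a workstation, the equivalent is a single kubectl call (context name from this run):
	
	kubectl --context multinode-038823 -n kube-system scale deployment coredns --replicas=1
	kubectl --context multinode-038823 -n kube-system get deployment coredns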
	I1004 01:18:45.129211  151348 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1004 01:18:45.131687  151348 out.go:177] * Verifying Kubernetes components...
	I1004 01:18:45.133119  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:18:45.150543  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:18:45.150860  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:18:45.151208  151348 node_ready.go:35] waiting up to 6m0s for node "multinode-038823-m02" to be "Ready" ...
	I1004 01:18:45.151310  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:45.151324  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.151334  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.151345  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.153731  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:45.153748  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.153756  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.153761  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.153767  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.153774  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.153780  151348 round_trippers.go:580]     Audit-Id: 92cd00ea-9952-4236-975a-b7c540ab17d5
	I1004 01:18:45.153787  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.153939  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"a4da8b57-0ac8-4804-bc46-62830c7335ea","resourceVersion":"1059","creationTimestamp":"2023-10-04T01:18:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1004 01:18:45.154230  151348 node_ready.go:49] node "multinode-038823-m02" has status "Ready":"True"
	I1004 01:18:45.154245  151348 node_ready.go:38] duration metric: took 3.018496ms waiting for node "multinode-038823-m02" to be "Ready" ...
	I1004 01:18:45.154255  151348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
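	The readiness polling that follows (raw GETs against /api/v1/nodes/... and /api/v1/namespaces/kube-system/pods) corresponds to the usual kubectl waits; a sketch with the names from this run, with CoreDNS shown as one of the system-critical pods being checked:
	
	# Wait for the rejoined worker to report Ready
	kubectl --context multinode-038823 wait --for=condition=Ready node/multinode-038823-m02 --timeout=6m
	# Wait for a system-critical pod, e.g. CoreDNS
	kubectl --context multinode-038823 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m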
	I1004 01:18:45.154320  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:18:45.154327  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.154334  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.154344  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.157555  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:45.157577  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.157586  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.157595  151348 round_trippers.go:580]     Audit-Id: 6a07e938-30ca-4cd6-ba06-13fb33673e97
	I1004 01:18:45.157603  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.157612  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.157640  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.157648  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.160051  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1063"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82079 chars]
	I1004 01:18:45.162551  151348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.162634  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:18:45.162644  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.162651  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.162658  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.169909  151348 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1004 01:18:45.169928  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.169935  151348 round_trippers.go:580]     Audit-Id: 07f8962b-f4c5-4abe-a346-e4a0e9d57fee
	I1004 01:18:45.169940  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.169945  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.169950  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.169957  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.169965  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.170844  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1004 01:18:45.171274  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:45.171286  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.171293  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.171301  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.175345  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:18:45.175361  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.175367  151348 round_trippers.go:580]     Audit-Id: 9007996c-0cb6-4cc6-8e29-2d30c0397081
	I1004 01:18:45.175372  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.175377  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.175384  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.175391  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.175399  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.175941  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:45.176242  151348 pod_ready.go:92] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:45.176259  151348 pod_ready.go:81] duration metric: took 13.6868ms waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.176270  151348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.176337  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:18:45.176349  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.176360  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.176370  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.181769  151348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:18:45.181790  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.181800  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.181807  151348 round_trippers.go:580]     Audit-Id: fe35823b-16ba-4099-9d73-4d19ccd6b6f9
	I1004 01:18:45.181815  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.181824  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.181852  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.181865  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.182044  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"865","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1004 01:18:45.182391  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:45.182401  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.182408  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.182414  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.187902  151348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:18:45.187916  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.187922  151348 round_trippers.go:580]     Audit-Id: 17c6732c-73e7-489b-ad47-1668e248aee1
	I1004 01:18:45.187927  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.187932  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.187939  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.187947  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.187955  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.188652  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:45.188949  151348 pod_ready.go:92] pod "etcd-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:45.188965  151348 pod_ready.go:81] duration metric: took 12.68238ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.188980  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.189036  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:18:45.189045  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.189052  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.189058  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.194212  151348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:18:45.194231  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.194241  151348 round_trippers.go:580]     Audit-Id: e95a40a7-3716-421d-b98c-1246e78d5e69
	I1004 01:18:45.194248  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.194253  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.194258  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.194263  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.194269  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.194457  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"876","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1004 01:18:45.194864  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:45.194875  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.194882  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.194892  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.204577  151348 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1004 01:18:45.204601  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.204613  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.204620  151348 round_trippers.go:580]     Audit-Id: fcbe7945-ec5d-4220-8296-7bfb5beca28f
	I1004 01:18:45.204627  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.204632  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.204637  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.204643  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.204791  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:45.205140  151348 pod_ready.go:92] pod "kube-apiserver-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:45.205158  151348 pod_ready.go:81] duration metric: took 16.170829ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.205171  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.205228  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:18:45.205238  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.205249  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.205259  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.211602  151348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 01:18:45.211624  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.211631  151348 round_trippers.go:580]     Audit-Id: 16f73242-ef2f-45ea-8542-5b01e83a4d87
	I1004 01:18:45.211637  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.211642  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.211647  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.211652  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.211657  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.211830  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"816","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1004 01:18:45.212217  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:45.212232  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.212243  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.212254  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.215350  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:45.215374  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.215385  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.215393  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.215400  151348 round_trippers.go:580]     Audit-Id: 56070db3-a7e8-49ea-89e8-4eef3fe3f42e
	I1004 01:18:45.215408  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.215416  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.215429  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.215596  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:45.215969  151348 pod_ready.go:92] pod "kube-controller-manager-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:45.215987  151348 pod_ready.go:81] duration metric: took 10.806828ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.215999  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:45.351359  151348 request.go:629] Waited for 135.283927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:18:45.351438  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:18:45.351446  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.351458  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.351469  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.354097  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:45.354119  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.354129  151348 round_trippers.go:580]     Audit-Id: e94c5f27-eea1-4939-a105-33a4626aafcd
	I1004 01:18:45.354136  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.354144  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.354153  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.354166  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.354178  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.354291  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"1061","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1004 01:18:45.552068  151348 request.go:629] Waited for 197.365874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:45.552147  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:45.552152  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.552159  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.552166  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.554898  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:45.554920  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.554931  151348 round_trippers.go:580]     Audit-Id: 5ab994db-8eb5-42c2-a3d3-7035759caa1f
	I1004 01:18:45.554939  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.554947  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.554954  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.554968  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.554980  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.555119  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"a4da8b57-0ac8-4804-bc46-62830c7335ea","resourceVersion":"1059","creationTimestamp":"2023-10-04T01:18:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1004 01:18:45.751854  151348 request.go:629] Waited for 196.404565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:18:45.751927  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:18:45.751935  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.751946  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.751956  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.755150  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:45.755178  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.755189  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.755200  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.755209  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.755218  151348 round_trippers.go:580]     Audit-Id: 61eca08f-48ec-4242-a396-38db2d529f8d
	I1004 01:18:45.755227  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.755237  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.755434  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"1061","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1004 01:18:45.952401  151348 request.go:629] Waited for 196.42048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:45.952493  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:45.952505  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:45.952515  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:45.952527  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:45.956311  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:45.956341  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:45.956352  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:45.956361  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:45 GMT
	I1004 01:18:45.956369  151348 round_trippers.go:580]     Audit-Id: 8cb2a637-5368-471a-a0e6-4734c5648ae9
	I1004 01:18:45.956377  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:45.956386  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:45.956397  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:45.958153  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"a4da8b57-0ac8-4804-bc46-62830c7335ea","resourceVersion":"1059","creationTimestamp":"2023-10-04T01:18:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1004 01:18:46.459404  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:18:46.459433  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:46.459452  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:46.459458  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:46.462075  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:46.462096  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:46.462102  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:46.462108  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:46.462113  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:46.462118  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:46.462123  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:46 GMT
	I1004 01:18:46.462129  151348 round_trippers.go:580]     Audit-Id: 9f070d49-1b5d-41ce-b20e-147293e20c41
	I1004 01:18:46.462303  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"1075","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1004 01:18:46.462792  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:18:46.462807  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:46.462814  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:46.462820  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:46.465709  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:46.465721  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:46.465727  151348 round_trippers.go:580]     Audit-Id: 8d8704eb-672d-4291-96ad-eec2759f256f
	I1004 01:18:46.465732  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:46.465737  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:46.465742  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:46.465747  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:46.465752  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:46 GMT
	I1004 01:18:46.466039  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"a4da8b57-0ac8-4804-bc46-62830c7335ea","resourceVersion":"1059","creationTimestamp":"2023-10-04T01:18:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1004 01:18:46.466332  151348 pod_ready.go:92] pod "kube-proxy-hgg2z" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:46.466349  151348 pod_ready.go:81] duration metric: took 1.250342535s waiting for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:46.466357  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:46.551703  151348 request.go:629] Waited for 85.281708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:18:46.551778  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:18:46.551784  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:46.551792  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:46.551798  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:46.554569  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:46.554593  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:46.554600  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:46.554606  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:46.554611  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:46.554616  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:46 GMT
	I1004 01:18:46.554621  151348 round_trippers.go:580]     Audit-Id: 0ed2137a-44f4-4f7d-8bab-a292a83ed61e
	I1004 01:18:46.554627  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:46.554820  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psqss","generateName":"kube-proxy-","namespace":"kube-system","uid":"455f6f13-5661-4b4e-847b-9266e44c03d8","resourceVersion":"712","creationTimestamp":"2023-10-04T01:08:09Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1004 01:18:46.751680  151348 request.go:629] Waited for 196.355069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:18:46.751746  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:18:46.751751  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:46.751759  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:46.751765  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:46.754583  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:46.754606  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:46.754613  151348 round_trippers.go:580]     Audit-Id: 7c2aaa59-6835-4c8c-8e26-236ccb833884
	I1004 01:18:46.754618  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:46.754623  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:46.754628  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:46.754633  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:46.754639  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:46 GMT
	I1004 01:18:46.754844  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"aecf3685-48bc-4468-b845-c7c671e5cd13","resourceVersion":"792","creationTimestamp":"2023-10-04T01:08:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3412 chars]
	I1004 01:18:46.755143  151348 pod_ready.go:92] pod "kube-proxy-psqss" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:46.755159  151348 pod_ready.go:81] duration metric: took 288.79617ms waiting for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:46.755168  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:46.951515  151348 request.go:629] Waited for 196.285804ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:18:46.951597  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:18:46.951603  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:46.951614  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:46.951625  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:46.957272  151348 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1004 01:18:46.957307  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:46.957318  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:46.957326  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:46.957334  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:46.957342  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:46 GMT
	I1004 01:18:46.957350  151348 round_trippers.go:580]     Audit-Id: 8ffd73f2-0903-417f-a25f-6ab62167687f
	I1004 01:18:46.957358  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:46.958355  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"791","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:18:47.152244  151348 request.go:629] Waited for 193.373696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:47.152326  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:47.152331  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:47.152340  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:47.152346  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:47.155565  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:47.155594  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:47.155604  151348 round_trippers.go:580]     Audit-Id: 57dd1b7b-9877-438f-bcab-e1d4ad24c519
	I1004 01:18:47.155613  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:47.155621  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:47.155629  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:47.155638  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:47.155646  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:47 GMT
	I1004 01:18:47.156175  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:47.156616  151348 pod_ready.go:92] pod "kube-proxy-pz9j4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:47.156638  151348 pod_ready.go:81] duration metric: took 401.46133ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:47.156651  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:47.352132  151348 request.go:629] Waited for 195.406388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:18:47.352202  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:18:47.352208  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:47.352215  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:47.352221  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:47.354713  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:18:47.354736  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:47.354743  151348 round_trippers.go:580]     Audit-Id: 7dda8548-d9dc-4a5a-bc5f-d52f69205b1e
	I1004 01:18:47.354748  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:47.354753  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:47.354759  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:47.354766  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:47.354771  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:47 GMT
	I1004 01:18:47.354929  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"889","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1004 01:18:47.551680  151348 request.go:629] Waited for 196.356817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:47.551745  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:18:47.551750  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:47.551758  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:47.551767  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:47.555322  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:47.555342  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:47.555349  151348 round_trippers.go:580]     Audit-Id: c7bb48d1-2330-430a-936a-47bbc4d663f6
	I1004 01:18:47.555355  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:47.555360  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:47.555365  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:47.555370  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:47.555375  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:47 GMT
	I1004 01:18:47.555952  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:18:47.556251  151348 pod_ready.go:92] pod "kube-scheduler-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:18:47.556263  151348 pod_ready.go:81] duration metric: took 399.60544ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:18:47.556273  151348 pod_ready.go:38] duration metric: took 2.402007749s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
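
The pod_ready waits traced above poll the API server until each system pod reports the PodReady condition as True, with a per-pod timeout of 6m0s. A minimal sketch of that polling pattern with client-go follows; the kubeconfig path, namespace, pod name, and poll interval are illustrative placeholders, not minikube's actual pod_ready.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodReady polls the given pod until its PodReady condition is True,
// mirroring the per-pod checks logged above (interval and names are illustrative).
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "etcd-multinode-038823", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
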
	I1004 01:18:47.556296  151348 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:18:47.556357  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:18:47.572916  151348 system_svc.go:56] duration metric: took 16.623229ms WaitForService to wait for kubelet.
	I1004 01:18:47.572943  151348 kubeadm.go:581] duration metric: took 2.443699422s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:18:47.572972  151348 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:18:47.751755  151348 request.go:629] Waited for 178.71325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1004 01:18:47.751841  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:18:47.751846  151348 round_trippers.go:469] Request Headers:
	I1004 01:18:47.751855  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:18:47.751861  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:18:47.755446  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:18:47.755468  151348 round_trippers.go:577] Response Headers:
	I1004 01:18:47.755474  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:18:47.755479  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:18:47.755485  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:18:47 GMT
	I1004 01:18:47.755490  151348 round_trippers.go:580]     Audit-Id: 8a94f584-85d3-47c5-85f8-8b8d7b163d00
	I1004 01:18:47.755494  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:18:47.755500  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:18:47.756389  151348 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15105 chars]
	I1004 01:18:47.756950  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:18:47.756970  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:18:47.756978  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:18:47.756982  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:18:47.756987  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:18:47.756992  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:18:47.757002  151348 node_conditions.go:105] duration metric: took 184.023953ms to run NodePressure ...
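
The NodePressure step lists all nodes once and reads each node's capacity, which is where the ephemeral-storage and CPU figures above come from. A rough client-go equivalent, purely as an illustration of the API in use (kubeconfig path and output format are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is a ResourceList; these keys match the quantities logged above.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
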
	I1004 01:18:47.757016  151348 start.go:228] waiting for startup goroutines ...
	I1004 01:18:47.757060  151348 start.go:242] writing updated cluster config ...
	I1004 01:18:47.757494  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:18:47.757592  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:18:47.761005  151348 out.go:177] * Starting worker node multinode-038823-m03 in cluster multinode-038823
	I1004 01:18:47.762490  151348 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:18:47.762516  151348 cache.go:57] Caching tarball of preloaded images
	I1004 01:18:47.762620  151348 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:18:47.762631  151348 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:18:47.762716  151348 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/config.json ...
	I1004 01:18:47.762881  151348 start.go:365] acquiring machines lock for multinode-038823-m03: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:18:47.762922  151348 start.go:369] acquired machines lock for "multinode-038823-m03" in 22.176µs
	I1004 01:18:47.762935  151348 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:18:47.762941  151348 fix.go:54] fixHost starting: m03
	I1004 01:18:47.763194  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:18:47.763227  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:18:47.778071  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35105
	I1004 01:18:47.778525  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:18:47.778989  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:18:47.779011  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:18:47.779350  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:18:47.779571  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:18:47.779735  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetState
	I1004 01:18:47.781270  151348 fix.go:102] recreateIfNeeded on multinode-038823-m03: state=Running err=<nil>
	W1004 01:18:47.781297  151348 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:18:47.783257  151348 out.go:177] * Updating the running kvm2 "multinode-038823-m03" VM ...
	I1004 01:18:47.784726  151348 machine.go:88] provisioning docker machine ...
	I1004 01:18:47.784744  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:18:47.784959  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetMachineName
	I1004 01:18:47.785107  151348 buildroot.go:166] provisioning hostname "multinode-038823-m03"
	I1004 01:18:47.785128  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetMachineName
	I1004 01:18:47.785248  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:18:47.787467  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:47.787910  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:47.787929  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:47.788072  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:18:47.788237  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:47.788392  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:47.788531  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:18:47.788681  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:18:47.789130  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1004 01:18:47.789151  151348 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-038823-m03 && echo "multinode-038823-m03" | sudo tee /etc/hostname
	I1004 01:18:47.932338  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-038823-m03
	
	I1004 01:18:47.932372  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:18:47.935353  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:47.935749  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:47.935784  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:47.935967  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:18:47.936186  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:47.936366  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:47.936492  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:18:47.936662  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:18:47.937021  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1004 01:18:47.937043  151348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-038823-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-038823-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-038823-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:18:48.062650  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:18:48.062678  151348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:18:48.062697  151348 buildroot.go:174] setting up certificates
	I1004 01:18:48.062705  151348 provision.go:83] configureAuth start
	I1004 01:18:48.062714  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetMachineName
	I1004 01:18:48.063036  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetIP
	I1004 01:18:48.065588  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.066026  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:48.066051  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.066198  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:18:48.068566  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.068926  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:48.068953  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.069069  151348 provision.go:138] copyHostCerts
	I1004 01:18:48.069092  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:18:48.069119  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:18:48.069129  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:18:48.069192  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:18:48.069259  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:18:48.069285  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:18:48.069292  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:18:48.069322  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:18:48.069365  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:18:48.069380  151348 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:18:48.069388  151348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:18:48.069408  151348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:18:48.069450  151348 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.multinode-038823-m03 san=[192.168.39.44 192.168.39.44 localhost 127.0.0.1 minikube multinode-038823-m03]
	I1004 01:18:48.246266  151348 provision.go:172] copyRemoteCerts
	I1004 01:18:48.246322  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:18:48.246345  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:18:48.248953  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.249360  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:48.249397  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.249651  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:18:48.249861  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:48.250027  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:18:48.250169  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m03/id_rsa Username:docker}
	I1004 01:18:48.340340  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1004 01:18:48.340414  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 01:18:48.368204  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1004 01:18:48.368287  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:18:48.392064  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1004 01:18:48.392135  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1004 01:18:48.417719  151348 provision.go:86] duration metric: configureAuth took 354.999674ms
	I1004 01:18:48.417750  151348 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:18:48.418004  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:18:48.418081  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:18:48.420609  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.420997  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:18:48.421032  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:18:48.421184  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:18:48.421369  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:48.421534  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:18:48.421646  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:18:48.421789  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:18:48.422135  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1004 01:18:48.422153  151348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:20:18.968742  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:20:18.968773  151348 machine.go:91] provisioned docker machine in 1m31.184034551s
	I1004 01:20:18.968784  151348 start.go:300] post-start starting for "multinode-038823-m03" (driver="kvm2")
	I1004 01:20:18.968806  151348 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:20:18.968838  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:20:18.969215  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:20:18.969255  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:20:18.972136  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:18.972571  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:18.972611  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:18.972746  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:20:18.972968  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:20:18.973134  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:20:18.973272  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m03/id_rsa Username:docker}
	I1004 01:20:19.068955  151348 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:20:19.073127  151348 command_runner.go:130] > NAME=Buildroot
	I1004 01:20:19.073158  151348 command_runner.go:130] > VERSION=2021.02.12-1-gb090841-dirty
	I1004 01:20:19.073165  151348 command_runner.go:130] > ID=buildroot
	I1004 01:20:19.073173  151348 command_runner.go:130] > VERSION_ID=2021.02.12
	I1004 01:20:19.073179  151348 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1004 01:20:19.073212  151348 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:20:19.073229  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:20:19.073325  151348 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:20:19.073407  151348 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:20:19.073417  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /etc/ssl/certs/1355652.pem
	I1004 01:20:19.073518  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:20:19.082636  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:20:19.104750  151348 start.go:303] post-start completed in 135.94231ms
	I1004 01:20:19.104777  151348 fix.go:56] fixHost completed within 1m31.341834625s
	I1004 01:20:19.104804  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:20:19.107584  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.107972  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:19.108005  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.108151  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:20:19.108343  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:20:19.108531  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:20:19.108723  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:20:19.108943  151348 main.go:141] libmachine: Using SSH client type: native
	I1004 01:20:19.109396  151348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.44 22 <nil> <nil>}
	I1004 01:20:19.109411  151348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:20:19.235143  151348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696382419.228695798
	
	I1004 01:20:19.235171  151348 fix.go:206] guest clock: 1696382419.228695798
	I1004 01:20:19.235181  151348 fix.go:219] Guest: 2023-10-04 01:20:19.228695798 +0000 UTC Remote: 2023-10-04 01:20:19.104781302 +0000 UTC m=+554.428876381 (delta=123.914496ms)
	I1004 01:20:19.235202  151348 fix.go:190] guest clock delta is within tolerance: 123.914496ms
	I1004 01:20:19.235209  151348 start.go:83] releasing machines lock for "multinode-038823-m03", held for 1m31.472277392s
	I1004 01:20:19.235236  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:20:19.235612  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetIP
	I1004 01:20:19.238582  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.238963  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:19.238997  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.241216  151348 out.go:177] * Found network options:
	I1004 01:20:19.243167  151348 out.go:177]   - NO_PROXY=192.168.39.212,192.168.39.181
	W1004 01:20:19.244519  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 01:20:19.244540  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:20:19.244555  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:20:19.245174  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:20:19.245402  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .DriverName
	I1004 01:20:19.245514  151348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:20:19.245554  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	W1004 01:20:19.245622  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	W1004 01:20:19.245643  151348 proxy.go:119] fail to check proxy env: Error ip not in block
	I1004 01:20:19.245709  151348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:20:19.245731  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHHostname
	I1004 01:20:19.248380  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.248703  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.248772  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:19.248808  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.248954  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:20:19.249127  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:20:19.249284  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:19.249286  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:20:19.249318  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:19.249490  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHPort
	I1004 01:20:19.249482  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m03/id_rsa Username:docker}
	I1004 01:20:19.249616  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHKeyPath
	I1004 01:20:19.249800  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetSSHUsername
	I1004 01:20:19.249980  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m03/id_rsa Username:docker}
	I1004 01:20:19.488683  151348 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1004 01:20:19.488737  151348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1004 01:20:19.494761  151348 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1004 01:20:19.494802  151348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:20:19.494870  151348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:20:19.503058  151348 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 01:20:19.503088  151348 start.go:469] detecting cgroup driver to use...
	I1004 01:20:19.503154  151348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:20:19.516816  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:20:19.529383  151348 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:20:19.529444  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:20:19.544226  151348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:20:19.557956  151348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:20:19.694668  151348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:20:19.846628  151348 docker.go:213] disabling docker service ...
	I1004 01:20:19.846704  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:20:19.862686  151348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:20:19.875998  151348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:20:20.003465  151348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:20:20.132459  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:20:20.146744  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:20:20.164391  151348 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1004 01:20:20.164425  151348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:20:20.164482  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:20:20.175605  151348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:20:20.175665  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:20:20.186879  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:20:20.197349  151348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:20:20.207769  151348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:20:20.218387  151348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:20:20.229021  151348 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1004 01:20:20.229238  151348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:20:20.239599  151348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:20:20.376390  151348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:20:20.610107  151348 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:20:20.610183  151348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:20:20.622770  151348 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1004 01:20:20.622799  151348 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1004 01:20:20.622811  151348 command_runner.go:130] > Device: 16h/22d	Inode: 1162        Links: 1
	I1004 01:20:20.622822  151348 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:20:20.622830  151348 command_runner.go:130] > Access: 2023-10-04 01:20:20.534281107 +0000
	I1004 01:20:20.622839  151348 command_runner.go:130] > Modify: 2023-10-04 01:20:20.534281107 +0000
	I1004 01:20:20.622853  151348 command_runner.go:130] > Change: 2023-10-04 01:20:20.534281107 +0000
	I1004 01:20:20.622859  151348 command_runner.go:130] >  Birth: -
	I1004 01:20:20.623167  151348 start.go:537] Will wait 60s for crictl version
	I1004 01:20:20.623243  151348 ssh_runner.go:195] Run: which crictl
	I1004 01:20:20.627329  151348 command_runner.go:130] > /usr/bin/crictl
	I1004 01:20:20.627598  151348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:20:20.670893  151348 command_runner.go:130] > Version:  0.1.0
	I1004 01:20:20.670915  151348 command_runner.go:130] > RuntimeName:  cri-o
	I1004 01:20:20.670922  151348 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1004 01:20:20.670939  151348 command_runner.go:130] > RuntimeApiVersion:  v1
	I1004 01:20:20.670961  151348 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:20:20.671022  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:20:20.723545  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:20:20.723572  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:20:20.723584  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:20:20.723592  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:20:20.723601  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:20:20.723609  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:20:20.723616  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:20:20.723627  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:20:20.723643  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:20:20.723657  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:20:20.723667  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:20:20.723677  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:20:20.725326  151348 ssh_runner.go:195] Run: crio --version
	I1004 01:20:20.777395  151348 command_runner.go:130] > crio version 1.24.1
	I1004 01:20:20.777420  151348 command_runner.go:130] > Version:          1.24.1
	I1004 01:20:20.777432  151348 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1004 01:20:20.777439  151348 command_runner.go:130] > GitTreeState:     dirty
	I1004 01:20:20.777448  151348 command_runner.go:130] > BuildDate:        2023-09-18T23:54:21Z
	I1004 01:20:20.777459  151348 command_runner.go:130] > GoVersion:        go1.19.9
	I1004 01:20:20.777466  151348 command_runner.go:130] > Compiler:         gc
	I1004 01:20:20.777474  151348 command_runner.go:130] > Platform:         linux/amd64
	I1004 01:20:20.777483  151348 command_runner.go:130] > Linkmode:         dynamic
	I1004 01:20:20.777498  151348 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1004 01:20:20.777510  151348 command_runner.go:130] > SeccompEnabled:   true
	I1004 01:20:20.777520  151348 command_runner.go:130] > AppArmorEnabled:  false
	I1004 01:20:20.781018  151348 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:20:20.782508  151348 out.go:177]   - env NO_PROXY=192.168.39.212
	I1004 01:20:20.783849  151348 out.go:177]   - env NO_PROXY=192.168.39.212,192.168.39.181
	I1004 01:20:20.785064  151348 main.go:141] libmachine: (multinode-038823-m03) Calling .GetIP
	I1004 01:20:20.787836  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:20.788183  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:a5:44", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:08:44 +0000 UTC Type:0 Mac:52:54:00:69:a5:44 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-038823-m03 Clientid:01:52:54:00:69:a5:44}
	I1004 01:20:20.788217  151348 main.go:141] libmachine: (multinode-038823-m03) DBG | domain multinode-038823-m03 has defined IP address 192.168.39.44 and MAC address 52:54:00:69:a5:44 in network mk-multinode-038823
	I1004 01:20:20.788455  151348 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:20:20.793221  151348 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1004 01:20:20.793262  151348 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823 for IP: 192.168.39.44
	I1004 01:20:20.793279  151348 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:20:20.793419  151348 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:20:20.793454  151348 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:20:20.793466  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1004 01:20:20.793480  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1004 01:20:20.793492  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1004 01:20:20.793505  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1004 01:20:20.793558  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:20:20.793585  151348 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:20:20.793595  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:20:20.793618  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:20:20.793641  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:20:20.793663  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:20:20.793707  151348 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:20:20.793732  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> /usr/share/ca-certificates/1355652.pem
	I1004 01:20:20.793745  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:20:20.793757  151348 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem -> /usr/share/ca-certificates/135565.pem
	I1004 01:20:20.794151  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:20:20.821021  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:20:20.847922  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:20:20.872032  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:20:20.894382  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:20:20.917721  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:20:20.942367  151348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:20:20.966197  151348 ssh_runner.go:195] Run: openssl version
	I1004 01:20:20.972563  151348 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1004 01:20:20.972814  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:20:20.984516  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:20:20.990318  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:20:20.990387  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:20:20.990435  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:20:20.996108  151348 command_runner.go:130] > 3ec20f2e
	I1004 01:20:20.996165  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:20:21.005454  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:20:21.016325  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:20:21.021102  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:20:21.021201  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:20:21.021255  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:20:21.026980  151348 command_runner.go:130] > b5213941
	I1004 01:20:21.027056  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:20:21.036931  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:20:21.050265  151348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:20:21.055206  151348 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:20:21.055464  151348 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:20:21.055520  151348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:20:21.061487  151348 command_runner.go:130] > 51391683
	I1004 01:20:21.061542  151348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:20:21.075986  151348 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:20:21.090603  151348 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:20:21.090935  151348 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 01:20:21.091040  151348 ssh_runner.go:195] Run: crio config
	I1004 01:20:21.147217  151348 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1004 01:20:21.147240  151348 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1004 01:20:21.147247  151348 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1004 01:20:21.147253  151348 command_runner.go:130] > #
	I1004 01:20:21.147266  151348 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1004 01:20:21.147277  151348 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1004 01:20:21.147287  151348 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1004 01:20:21.147296  151348 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1004 01:20:21.147300  151348 command_runner.go:130] > # reload'.
	I1004 01:20:21.147309  151348 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1004 01:20:21.147319  151348 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1004 01:20:21.147325  151348 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1004 01:20:21.147333  151348 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1004 01:20:21.147339  151348 command_runner.go:130] > [crio]
	I1004 01:20:21.147353  151348 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1004 01:20:21.147365  151348 command_runner.go:130] > # containers images, in this directory.
	I1004 01:20:21.147377  151348 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1004 01:20:21.147394  151348 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1004 01:20:21.147403  151348 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1004 01:20:21.147409  151348 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1004 01:20:21.147418  151348 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1004 01:20:21.147425  151348 command_runner.go:130] > storage_driver = "overlay"
	I1004 01:20:21.147438  151348 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1004 01:20:21.147452  151348 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1004 01:20:21.147462  151348 command_runner.go:130] > storage_option = [
	I1004 01:20:21.147470  151348 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1004 01:20:21.147479  151348 command_runner.go:130] > ]
	I1004 01:20:21.147490  151348 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1004 01:20:21.147503  151348 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1004 01:20:21.147514  151348 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1004 01:20:21.147534  151348 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1004 01:20:21.147547  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1004 01:20:21.147558  151348 command_runner.go:130] > # always happen on a node reboot
	I1004 01:20:21.147566  151348 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1004 01:20:21.147579  151348 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1004 01:20:21.147590  151348 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1004 01:20:21.147605  151348 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1004 01:20:21.147618  151348 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1004 01:20:21.147632  151348 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1004 01:20:21.147649  151348 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1004 01:20:21.147658  151348 command_runner.go:130] > # internal_wipe = true
	I1004 01:20:21.147666  151348 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1004 01:20:21.147678  151348 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1004 01:20:21.147691  151348 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1004 01:20:21.147703  151348 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1004 01:20:21.147716  151348 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1004 01:20:21.147727  151348 command_runner.go:130] > [crio.api]
	I1004 01:20:21.147738  151348 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1004 01:20:21.147749  151348 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1004 01:20:21.147761  151348 command_runner.go:130] > # IP address on which the stream server will listen.
	I1004 01:20:21.147770  151348 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1004 01:20:21.147784  151348 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1004 01:20:21.147795  151348 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1004 01:20:21.147804  151348 command_runner.go:130] > # stream_port = "0"
	I1004 01:20:21.147809  151348 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1004 01:20:21.147836  151348 command_runner.go:130] > # stream_enable_tls = false
	I1004 01:20:21.147845  151348 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1004 01:20:21.147850  151348 command_runner.go:130] > # stream_idle_timeout = ""
	I1004 01:20:21.147857  151348 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1004 01:20:21.147863  151348 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1004 01:20:21.147869  151348 command_runner.go:130] > # minutes.
	I1004 01:20:21.147873  151348 command_runner.go:130] > # stream_tls_cert = ""
	I1004 01:20:21.147879  151348 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1004 01:20:21.147887  151348 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1004 01:20:21.147893  151348 command_runner.go:130] > # stream_tls_key = ""
	I1004 01:20:21.147901  151348 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1004 01:20:21.147908  151348 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1004 01:20:21.147916  151348 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1004 01:20:21.147920  151348 command_runner.go:130] > # stream_tls_ca = ""
	I1004 01:20:21.147930  151348 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:20:21.147935  151348 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1004 01:20:21.147944  151348 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1004 01:20:21.147949  151348 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1004 01:20:21.147961  151348 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1004 01:20:21.147969  151348 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1004 01:20:21.147973  151348 command_runner.go:130] > [crio.runtime]
	I1004 01:20:21.147982  151348 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1004 01:20:21.147987  151348 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1004 01:20:21.147994  151348 command_runner.go:130] > # "nofile=1024:2048"
	I1004 01:20:21.148000  151348 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1004 01:20:21.148006  151348 command_runner.go:130] > # default_ulimits = [
	I1004 01:20:21.148009  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148016  151348 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1004 01:20:21.148022  151348 command_runner.go:130] > # no_pivot = false
	I1004 01:20:21.148028  151348 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1004 01:20:21.148036  151348 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1004 01:20:21.148042  151348 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1004 01:20:21.148050  151348 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1004 01:20:21.148055  151348 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1004 01:20:21.148062  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:20:21.148067  151348 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1004 01:20:21.148073  151348 command_runner.go:130] > # Cgroup setting for conmon
	I1004 01:20:21.148084  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1004 01:20:21.148094  151348 command_runner.go:130] > conmon_cgroup = "pod"
	I1004 01:20:21.148103  151348 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1004 01:20:21.148111  151348 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1004 01:20:21.148118  151348 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1004 01:20:21.148127  151348 command_runner.go:130] > conmon_env = [
	I1004 01:20:21.148144  151348 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1004 01:20:21.148153  151348 command_runner.go:130] > ]
	I1004 01:20:21.148164  151348 command_runner.go:130] > # Additional environment variables to set for all the
	I1004 01:20:21.148177  151348 command_runner.go:130] > # containers. These are overridden if set in the
	I1004 01:20:21.148188  151348 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1004 01:20:21.148195  151348 command_runner.go:130] > # default_env = [
	I1004 01:20:21.148203  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148211  151348 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1004 01:20:21.148215  151348 command_runner.go:130] > # selinux = false
	I1004 01:20:21.148222  151348 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1004 01:20:21.148234  151348 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1004 01:20:21.148247  151348 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1004 01:20:21.148257  151348 command_runner.go:130] > # seccomp_profile = ""
	I1004 01:20:21.148267  151348 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1004 01:20:21.148281  151348 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1004 01:20:21.148293  151348 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1004 01:20:21.148301  151348 command_runner.go:130] > # which might increase security.
	I1004 01:20:21.148307  151348 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1004 01:20:21.148315  151348 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1004 01:20:21.148322  151348 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1004 01:20:21.148331  151348 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1004 01:20:21.148337  151348 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1004 01:20:21.148344  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:20:21.148349  151348 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1004 01:20:21.148362  151348 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1004 01:20:21.148374  151348 command_runner.go:130] > # the cgroup blockio controller.
	I1004 01:20:21.148389  151348 command_runner.go:130] > # blockio_config_file = ""
	I1004 01:20:21.148403  151348 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1004 01:20:21.148413  151348 command_runner.go:130] > # irqbalance daemon.
	I1004 01:20:21.148423  151348 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1004 01:20:21.148438  151348 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1004 01:20:21.148450  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:20:21.148460  151348 command_runner.go:130] > # rdt_config_file = ""
	I1004 01:20:21.148468  151348 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1004 01:20:21.148476  151348 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1004 01:20:21.148501  151348 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1004 01:20:21.148509  151348 command_runner.go:130] > # separate_pull_cgroup = ""
	I1004 01:20:21.148515  151348 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1004 01:20:21.148522  151348 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1004 01:20:21.148526  151348 command_runner.go:130] > # will be added.
	I1004 01:20:21.148531  151348 command_runner.go:130] > # default_capabilities = [
	I1004 01:20:21.148535  151348 command_runner.go:130] > # 	"CHOWN",
	I1004 01:20:21.148539  151348 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1004 01:20:21.148543  151348 command_runner.go:130] > # 	"FSETID",
	I1004 01:20:21.148550  151348 command_runner.go:130] > # 	"FOWNER",
	I1004 01:20:21.148554  151348 command_runner.go:130] > # 	"SETGID",
	I1004 01:20:21.148560  151348 command_runner.go:130] > # 	"SETUID",
	I1004 01:20:21.148564  151348 command_runner.go:130] > # 	"SETPCAP",
	I1004 01:20:21.148571  151348 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1004 01:20:21.148574  151348 command_runner.go:130] > # 	"KILL",
	I1004 01:20:21.148580  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148589  151348 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1004 01:20:21.148602  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:20:21.148613  151348 command_runner.go:130] > # default_sysctls = [
	I1004 01:20:21.148623  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148632  151348 command_runner.go:130] > # List of devices on the host that a
	I1004 01:20:21.148646  151348 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1004 01:20:21.148652  151348 command_runner.go:130] > # allowed_devices = [
	I1004 01:20:21.148660  151348 command_runner.go:130] > # 	"/dev/fuse",
	I1004 01:20:21.148669  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148678  151348 command_runner.go:130] > # List of additional devices. specified as
	I1004 01:20:21.148694  151348 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1004 01:20:21.148707  151348 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1004 01:20:21.148741  151348 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1004 01:20:21.148752  151348 command_runner.go:130] > # additional_devices = [
	I1004 01:20:21.148762  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148773  151348 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1004 01:20:21.148783  151348 command_runner.go:130] > # cdi_spec_dirs = [
	I1004 01:20:21.148793  151348 command_runner.go:130] > # 	"/etc/cdi",
	I1004 01:20:21.148801  151348 command_runner.go:130] > # 	"/var/run/cdi",
	I1004 01:20:21.148810  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148822  151348 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1004 01:20:21.148836  151348 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1004 01:20:21.148847  151348 command_runner.go:130] > # Defaults to false.
	I1004 01:20:21.148860  151348 command_runner.go:130] > # device_ownership_from_security_context = false
	I1004 01:20:21.148875  151348 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1004 01:20:21.148888  151348 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1004 01:20:21.148896  151348 command_runner.go:130] > # hooks_dir = [
	I1004 01:20:21.148908  151348 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1004 01:20:21.148918  151348 command_runner.go:130] > # ]
	I1004 01:20:21.148930  151348 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1004 01:20:21.148944  151348 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1004 01:20:21.148957  151348 command_runner.go:130] > # its default mounts from the following two files:
	I1004 01:20:21.148966  151348 command_runner.go:130] > #
	I1004 01:20:21.148977  151348 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1004 01:20:21.148992  151348 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1004 01:20:21.149006  151348 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1004 01:20:21.149015  151348 command_runner.go:130] > #
	I1004 01:20:21.149027  151348 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1004 01:20:21.149041  151348 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1004 01:20:21.149057  151348 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1004 01:20:21.149068  151348 command_runner.go:130] > #      only add mounts it finds in this file.
	I1004 01:20:21.149074  151348 command_runner.go:130] > #
	I1004 01:20:21.149082  151348 command_runner.go:130] > # default_mounts_file = ""
	I1004 01:20:21.149091  151348 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1004 01:20:21.149103  151348 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1004 01:20:21.149113  151348 command_runner.go:130] > pids_limit = 1024
	I1004 01:20:21.149125  151348 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1004 01:20:21.149144  151348 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1004 01:20:21.149157  151348 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1004 01:20:21.149175  151348 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1004 01:20:21.149185  151348 command_runner.go:130] > # log_size_max = -1
	I1004 01:20:21.149199  151348 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1004 01:20:21.149211  151348 command_runner.go:130] > # log_to_journald = false
	I1004 01:20:21.149222  151348 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1004 01:20:21.149237  151348 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1004 01:20:21.149246  151348 command_runner.go:130] > # Path to directory for container attach sockets.
	I1004 01:20:21.149255  151348 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1004 01:20:21.149265  151348 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1004 01:20:21.149272  151348 command_runner.go:130] > # bind_mount_prefix = ""
	I1004 01:20:21.149284  151348 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1004 01:20:21.149293  151348 command_runner.go:130] > # read_only = false
	I1004 01:20:21.149300  151348 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1004 01:20:21.149313  151348 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1004 01:20:21.149324  151348 command_runner.go:130] > # live configuration reload.
	I1004 01:20:21.149335  151348 command_runner.go:130] > # log_level = "info"
	I1004 01:20:21.149345  151348 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1004 01:20:21.149357  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:20:21.149364  151348 command_runner.go:130] > # log_filter = ""
	I1004 01:20:21.149378  151348 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1004 01:20:21.149387  151348 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1004 01:20:21.149393  151348 command_runner.go:130] > # separated by comma.
	I1004 01:20:21.149403  151348 command_runner.go:130] > # uid_mappings = ""
	I1004 01:20:21.149417  151348 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1004 01:20:21.149431  151348 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1004 01:20:21.149441  151348 command_runner.go:130] > # separated by comma.
	I1004 01:20:21.149474  151348 command_runner.go:130] > # gid_mappings = ""
	I1004 01:20:21.149488  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1004 01:20:21.149499  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:20:21.149509  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:20:21.149520  151348 command_runner.go:130] > # minimum_mappable_uid = -1
	I1004 01:20:21.149531  151348 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1004 01:20:21.149541  151348 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1004 01:20:21.149555  151348 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1004 01:20:21.149560  151348 command_runner.go:130] > # minimum_mappable_gid = -1
	I1004 01:20:21.149568  151348 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1004 01:20:21.149579  151348 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1004 01:20:21.149593  151348 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1004 01:20:21.149603  151348 command_runner.go:130] > # ctr_stop_timeout = 30
	I1004 01:20:21.149615  151348 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1004 01:20:21.149628  151348 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1004 01:20:21.149639  151348 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1004 01:20:21.149647  151348 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1004 01:20:21.149653  151348 command_runner.go:130] > drop_infra_ctr = false
	I1004 01:20:21.149667  151348 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1004 01:20:21.149680  151348 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1004 01:20:21.149697  151348 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1004 01:20:21.149707  151348 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1004 01:20:21.149721  151348 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1004 01:20:21.149730  151348 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1004 01:20:21.149738  151348 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1004 01:20:21.149749  151348 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1004 01:20:21.149761  151348 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1004 01:20:21.149774  151348 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1004 01:20:21.149789  151348 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1004 01:20:21.149804  151348 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1004 01:20:21.149814  151348 command_runner.go:130] > # default_runtime = "runc"
	I1004 01:20:21.149819  151348 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1004 01:20:21.149831  151348 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1004 01:20:21.149859  151348 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1004 01:20:21.149869  151348 command_runner.go:130] > # creation as a file is not desired either.
	I1004 01:20:21.149886  151348 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1004 01:20:21.149898  151348 command_runner.go:130] > # the hostname is being managed dynamically.
	I1004 01:20:21.149909  151348 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1004 01:20:21.149915  151348 command_runner.go:130] > # ]
	I1004 01:20:21.149921  151348 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1004 01:20:21.149936  151348 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1004 01:20:21.149950  151348 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1004 01:20:21.149964  151348 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1004 01:20:21.149972  151348 command_runner.go:130] > #
	I1004 01:20:21.149981  151348 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1004 01:20:21.149993  151348 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1004 01:20:21.150001  151348 command_runner.go:130] > #  runtime_type = "oci"
	I1004 01:20:21.150006  151348 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1004 01:20:21.150017  151348 command_runner.go:130] > #  privileged_without_host_devices = false
	I1004 01:20:21.150026  151348 command_runner.go:130] > #  allowed_annotations = []
	I1004 01:20:21.150036  151348 command_runner.go:130] > # Where:
	I1004 01:20:21.150045  151348 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1004 01:20:21.150059  151348 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1004 01:20:21.150073  151348 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1004 01:20:21.150085  151348 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1004 01:20:21.150091  151348 command_runner.go:130] > #   in $PATH.
	I1004 01:20:21.150101  151348 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1004 01:20:21.150113  151348 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1004 01:20:21.150127  151348 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1004 01:20:21.150139  151348 command_runner.go:130] > #   state.
	I1004 01:20:21.150153  151348 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1004 01:20:21.150167  151348 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1004 01:20:21.150177  151348 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1004 01:20:21.150187  151348 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1004 01:20:21.150202  151348 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1004 01:20:21.150213  151348 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1004 01:20:21.150224  151348 command_runner.go:130] > #   The currently recognized values are:
	I1004 01:20:21.150238  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1004 01:20:21.150271  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1004 01:20:21.150286  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1004 01:20:21.150297  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1004 01:20:21.150312  151348 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1004 01:20:21.150326  151348 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1004 01:20:21.150339  151348 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1004 01:20:21.150350  151348 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1004 01:20:21.150361  151348 command_runner.go:130] > #   should be moved to the container's cgroup
	I1004 01:20:21.150372  151348 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1004 01:20:21.150383  151348 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1004 01:20:21.150393  151348 command_runner.go:130] > runtime_type = "oci"
	I1004 01:20:21.150404  151348 command_runner.go:130] > runtime_root = "/run/runc"
	I1004 01:20:21.150414  151348 command_runner.go:130] > runtime_config_path = ""
	I1004 01:20:21.150425  151348 command_runner.go:130] > monitor_path = ""
	I1004 01:20:21.150433  151348 command_runner.go:130] > monitor_cgroup = ""
	I1004 01:20:21.150440  151348 command_runner.go:130] > monitor_exec_cgroup = ""
	I1004 01:20:21.150450  151348 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1004 01:20:21.150461  151348 command_runner.go:130] > # running containers
	I1004 01:20:21.150472  151348 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1004 01:20:21.150482  151348 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1004 01:20:21.150516  151348 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1004 01:20:21.150525  151348 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1004 01:20:21.150534  151348 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1004 01:20:21.150545  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1004 01:20:21.150558  151348 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1004 01:20:21.150568  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1004 01:20:21.150579  151348 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1004 01:20:21.150589  151348 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1004 01:20:21.150602  151348 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1004 01:20:21.150611  151348 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1004 01:20:21.150624  151348 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1004 01:20:21.150642  151348 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1004 01:20:21.150658  151348 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1004 01:20:21.150670  151348 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1004 01:20:21.150688  151348 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1004 01:20:21.150742  151348 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1004 01:20:21.150765  151348 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1004 01:20:21.150775  151348 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1004 01:20:21.150782  151348 command_runner.go:130] > # Example:
	I1004 01:20:21.150790  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1004 01:20:21.150801  151348 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1004 01:20:21.150814  151348 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1004 01:20:21.150827  151348 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1004 01:20:21.150836  151348 command_runner.go:130] > # cpuset = 0
	I1004 01:20:21.150846  151348 command_runner.go:130] > # cpushares = "0-1"
	I1004 01:20:21.150855  151348 command_runner.go:130] > # Where:
	I1004 01:20:21.150865  151348 command_runner.go:130] > # The workload name is workload-type.
	I1004 01:20:21.150874  151348 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1004 01:20:21.150886  151348 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1004 01:20:21.150900  151348 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1004 01:20:21.150916  151348 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1004 01:20:21.150929  151348 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1004 01:20:21.150938  151348 command_runner.go:130] > # 
	I1004 01:20:21.150948  151348 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1004 01:20:21.150954  151348 command_runner.go:130] > #
	I1004 01:20:21.150963  151348 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1004 01:20:21.150977  151348 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1004 01:20:21.150991  151348 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1004 01:20:21.151005  151348 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1004 01:20:21.151017  151348 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1004 01:20:21.151028  151348 command_runner.go:130] > [crio.image]
	I1004 01:20:21.151036  151348 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1004 01:20:21.151044  151348 command_runner.go:130] > # default_transport = "docker://"
	I1004 01:20:21.151054  151348 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1004 01:20:21.151068  151348 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:20:21.151079  151348 command_runner.go:130] > # global_auth_file = ""
	I1004 01:20:21.151090  151348 command_runner.go:130] > # The image used to instantiate infra containers.
	I1004 01:20:21.151102  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:20:21.151113  151348 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1004 01:20:21.151125  151348 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1004 01:20:21.151169  151348 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1004 01:20:21.151182  151348 command_runner.go:130] > # This option supports live configuration reload.
	I1004 01:20:21.151192  151348 command_runner.go:130] > # pause_image_auth_file = ""
	I1004 01:20:21.151204  151348 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1004 01:20:21.151214  151348 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1004 01:20:21.151224  151348 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1004 01:20:21.151237  151348 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1004 01:20:21.151249  151348 command_runner.go:130] > # pause_command = "/pause"
	I1004 01:20:21.151262  151348 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1004 01:20:21.151276  151348 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1004 01:20:21.151287  151348 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1004 01:20:21.151297  151348 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1004 01:20:21.151303  151348 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1004 01:20:21.151313  151348 command_runner.go:130] > # signature_policy = ""
	I1004 01:20:21.151324  151348 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1004 01:20:21.151338  151348 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1004 01:20:21.151347  151348 command_runner.go:130] > # changing them here.
	I1004 01:20:21.151358  151348 command_runner.go:130] > # insecure_registries = [
	I1004 01:20:21.151364  151348 command_runner.go:130] > # ]
	I1004 01:20:21.151378  151348 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1004 01:20:21.151386  151348 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1004 01:20:21.151392  151348 command_runner.go:130] > # image_volumes = "mkdir"
	I1004 01:20:21.151404  151348 command_runner.go:130] > # Temporary directory to use for storing big files
	I1004 01:20:21.151411  151348 command_runner.go:130] > # big_files_temporary_dir = ""
	I1004 01:20:21.151425  151348 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1004 01:20:21.151435  151348 command_runner.go:130] > # CNI plugins.
	I1004 01:20:21.151445  151348 command_runner.go:130] > [crio.network]
	I1004 01:20:21.151458  151348 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1004 01:20:21.151468  151348 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1004 01:20:21.151475  151348 command_runner.go:130] > # cni_default_network = ""
	I1004 01:20:21.151485  151348 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1004 01:20:21.151497  151348 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1004 01:20:21.151510  151348 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1004 01:20:21.151520  151348 command_runner.go:130] > # plugin_dirs = [
	I1004 01:20:21.151529  151348 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1004 01:20:21.151538  151348 command_runner.go:130] > # ]
	I1004 01:20:21.151548  151348 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1004 01:20:21.151555  151348 command_runner.go:130] > [crio.metrics]
	I1004 01:20:21.151561  151348 command_runner.go:130] > # Globally enable or disable metrics support.
	I1004 01:20:21.151567  151348 command_runner.go:130] > enable_metrics = true
	I1004 01:20:21.151572  151348 command_runner.go:130] > # Specify enabled metrics collectors.
	I1004 01:20:21.151579  151348 command_runner.go:130] > # Per default all metrics are enabled.
	I1004 01:20:21.151589  151348 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1004 01:20:21.151604  151348 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1004 01:20:21.151617  151348 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1004 01:20:21.151625  151348 command_runner.go:130] > # metrics_collectors = [
	I1004 01:20:21.151635  151348 command_runner.go:130] > # 	"operations",
	I1004 01:20:21.151643  151348 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1004 01:20:21.151654  151348 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1004 01:20:21.151662  151348 command_runner.go:130] > # 	"operations_errors",
	I1004 01:20:21.151667  151348 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1004 01:20:21.151671  151348 command_runner.go:130] > # 	"image_pulls_by_name",
	I1004 01:20:21.151676  151348 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1004 01:20:21.151682  151348 command_runner.go:130] > # 	"image_pulls_failures",
	I1004 01:20:21.151686  151348 command_runner.go:130] > # 	"image_pulls_successes",
	I1004 01:20:21.151693  151348 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1004 01:20:21.151697  151348 command_runner.go:130] > # 	"image_layer_reuse",
	I1004 01:20:21.151701  151348 command_runner.go:130] > # 	"containers_oom_total",
	I1004 01:20:21.151707  151348 command_runner.go:130] > # 	"containers_oom",
	I1004 01:20:21.151712  151348 command_runner.go:130] > # 	"processes_defunct",
	I1004 01:20:21.151718  151348 command_runner.go:130] > # 	"operations_total",
	I1004 01:20:21.151722  151348 command_runner.go:130] > # 	"operations_latency_seconds",
	I1004 01:20:21.151732  151348 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1004 01:20:21.151743  151348 command_runner.go:130] > # 	"operations_errors_total",
	I1004 01:20:21.151754  151348 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1004 01:20:21.151766  151348 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1004 01:20:21.151776  151348 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1004 01:20:21.151787  151348 command_runner.go:130] > # 	"image_pulls_success_total",
	I1004 01:20:21.151798  151348 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1004 01:20:21.151806  151348 command_runner.go:130] > # 	"containers_oom_count_total",
	I1004 01:20:21.151811  151348 command_runner.go:130] > # ]
	I1004 01:20:21.151817  151348 command_runner.go:130] > # The port on which the metrics server will listen.
	I1004 01:20:21.151823  151348 command_runner.go:130] > # metrics_port = 9090
	I1004 01:20:21.151829  151348 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1004 01:20:21.151835  151348 command_runner.go:130] > # metrics_socket = ""
	I1004 01:20:21.151840  151348 command_runner.go:130] > # The certificate for the secure metrics server.
	I1004 01:20:21.151848  151348 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1004 01:20:21.151857  151348 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1004 01:20:21.151864  151348 command_runner.go:130] > # certificate on any modification event.
	I1004 01:20:21.151868  151348 command_runner.go:130] > # metrics_cert = ""
	I1004 01:20:21.151874  151348 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1004 01:20:21.151879  151348 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1004 01:20:21.151885  151348 command_runner.go:130] > # metrics_key = ""
	I1004 01:20:21.151891  151348 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1004 01:20:21.151897  151348 command_runner.go:130] > [crio.tracing]
	I1004 01:20:21.151902  151348 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1004 01:20:21.151908  151348 command_runner.go:130] > # enable_tracing = false
	I1004 01:20:21.151914  151348 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1004 01:20:21.151920  151348 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1004 01:20:21.151925  151348 command_runner.go:130] > # Number of samples to collect per million spans.
	I1004 01:20:21.151932  151348 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1004 01:20:21.151938  151348 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1004 01:20:21.151944  151348 command_runner.go:130] > [crio.stats]
	I1004 01:20:21.151950  151348 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1004 01:20:21.151961  151348 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1004 01:20:21.151972  151348 command_runner.go:130] > # stats_collection_period = 0
	I1004 01:20:21.152012  151348 command_runner.go:130] ! time="2023-10-04 01:20:21.135142724Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1004 01:20:21.152025  151348 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
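Almost all of the crio.conf dump above is commented-out defaults; only a handful of keys (conmon, conmon_cgroup, conmon_env, seccomp_use_default_when_empty, cgroup_manager, pids_limit, drop_infra_ctr, pinns_path, the runc runtime table, pause_image, enable_metrics) are active overrides. A minimal stand-alone sketch, assuming the same /etc/crio/crio.conf path, that prints only the section headers and uncommented key/value lines so those overrides are easier to spot:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Path taken from the dump above; adjust if the config lives elsewhere.
	f, err := os.Open("/etc/crio/crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Skip blanks and commented-out defaults.
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		// Keep TOML section headers and explicit key = value overrides.
		if strings.HasPrefix(line, "[") || strings.Contains(line, "=") {
			fmt.Println(line)
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}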
	I1004 01:20:21.152084  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:20:21.152092  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:20:21.152102  151348 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:20:21.152122  151348 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.44 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-038823 NodeName:multinode-038823-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.212"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:20:21.152278  151348 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-038823-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.212"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
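The InitConfiguration block above is generated per node from the kubeadm options logged at kubeadm.go:176. A minimal sketch of rendering such a block with text/template; the struct and template here are illustrative only, not minikube's actual kubeadm template:

package main

import (
	"os"
	"text/template"
)

// nodeParams holds the few values that differ per node; the field names
// here are hypothetical, not minikube's real configuration struct.
type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
  taints: []
`

func main() {
	// Values for the m03 worker taken from the log above.
	p := nodeParams{
		AdvertiseAddress: "192.168.39.44",
		BindPort:         8443,
		NodeName:         "multinode-038823-m03",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("init").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		os.Exit(1)
	}
}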
	
	I1004 01:20:21.152406  151348 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-038823-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
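The kubelet unit above overrides ExecStart with per-node flags (hostname override, node IP, CRI endpoint). A small sketch of assembling that flag string; the helper name and signature are hypothetical, only the flag values are taken from the unit itself:

package main

import (
	"fmt"
	"strings"
)

// kubeletExecStart composes the ExecStart command line for a worker node.
func kubeletExecStart(binDir, version, nodeName, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("%s/%s/kubelet %s", binDir, version, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletExecStart("/var/lib/minikube/binaries", "v1.28.2",
		"multinode-038823-m03", "192.168.39.44"))
}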
	I1004 01:20:21.152480  151348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:20:21.161735  151348 command_runner.go:130] > kubeadm
	I1004 01:20:21.161762  151348 command_runner.go:130] > kubectl
	I1004 01:20:21.161767  151348 command_runner.go:130] > kubelet
	I1004 01:20:21.161792  151348 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:20:21.161857  151348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1004 01:20:21.170424  151348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1004 01:20:21.188127  151348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:20:21.205080  151348 ssh_runner.go:195] Run: grep 192.168.39.212	control-plane.minikube.internal$ /etc/hosts
	I1004 01:20:21.209257  151348 command_runner.go:130] > 192.168.39.212	control-plane.minikube.internal
	I1004 01:20:21.209356  151348 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:20:21.209665  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:20:21.209708  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:20:21.209667  151348 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:20:21.224963  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41497
	I1004 01:20:21.225426  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:20:21.225832  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:20:21.225871  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:20:21.226195  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:20:21.226406  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:20:21.226556  151348 start.go:304] JoinCluster: &{Name:multinode-038823 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-038823 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.212 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.181 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:20:21.226664  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1004 01:20:21.226678  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:20:21.229296  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:20:21.229705  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:20:21.229741  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:20:21.229896  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:20:21.230095  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:20:21.230249  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:20:21.230378  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:20:21.433136  151348 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e3z02o.7vxaf7sdv1h62935 --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
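The join command above comes from running kubeadm token create --print-join-command --ttl=0 on the control plane (here via the SSH runner). A stand-alone sketch of capturing the same output with os/exec, assuming kubeadm is on the local PATH and the caller has the needed privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// The test run executes this remotely over SSH; locally the same
	// command prints a ready-to-use "kubeadm join ..." line.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubeadm: %v\n%s", err, out)
		os.Exit(1)
	}
	joinCmd := strings.TrimSpace(string(out))
	fmt.Println("join command:", joinCmd)
}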
	I1004 01:20:21.433191  151348 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1004 01:20:21.433231  151348 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:20:21.433554  151348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:20:21.433602  151348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:20:21.448797  151348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39921
	I1004 01:20:21.449293  151348 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:20:21.449802  151348 main.go:141] libmachine: Using API Version  1
	I1004 01:20:21.449823  151348 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:20:21.450202  151348 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:20:21.450412  151348 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:20:21.450614  151348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-038823-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1004 01:20:21.450645  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:20:21.453895  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:20:21.454395  151348 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:16:15 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:20:21.454430  151348 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:20:21.454693  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:20:21.454882  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:20:21.455048  151348 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:20:21.455182  151348 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:20:21.657078  151348 command_runner.go:130] > node/multinode-038823-m03 cordoned
	I1004 01:20:24.695032  151348 command_runner.go:130] > pod "busybox-5bc68d56bd-tkn7n" has DeletionTimestamp older than 1 seconds, skipping
	I1004 01:20:24.695069  151348 command_runner.go:130] > node/multinode-038823-m03 drained
	I1004 01:20:24.696771  151348 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1004 01:20:24.696801  151348 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-zg29t, kube-system/kube-proxy-psqss
	I1004 01:20:24.696838  151348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl drain multinode-038823-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.246192488s)
	I1004 01:20:24.696862  151348 node.go:108] successfully drained node "m03"
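The drain above passes both --delete-emptydir-data and the deprecated --delete-local-data, which is what produces the deprecation warning. A small sketch of the argument list used for that kubectl invocation; the helper is hypothetical and drops the deprecated flag:

package main

import (
	"fmt"
	"strings"
)

// drainArgs builds the kubectl arguments seen in the log above; omitting
// the deprecated --delete-local-data flag would silence the warning.
func drainArgs(node string, gracePeriod int) []string {
	return []string{
		"drain", node,
		"--force",
		fmt.Sprintf("--grace-period=%d", gracePeriod),
		"--skip-wait-for-delete-timeout=1",
		"--disable-eviction",
		"--ignore-daemonsets",
		"--delete-emptydir-data",
	}
}

func main() {
	fmt.Println("kubectl " + strings.Join(drainArgs("multinode-038823-m03", 1), " "))
}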
	I1004 01:20:24.697290  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:20:24.697554  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:20:24.697950  151348 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1004 01:20:24.698016  151348 round_trippers.go:463] DELETE https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:24.698029  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:24.698041  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:24.698081  151348 round_trippers.go:473]     Content-Type: application/json
	I1004 01:20:24.698091  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:24.710613  151348 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1004 01:20:24.711888  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:24.711903  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:24.711911  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:24.711918  151348 round_trippers.go:580]     Content-Length: 171
	I1004 01:20:24.711926  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:24 GMT
	I1004 01:20:24.711936  151348 round_trippers.go:580]     Audit-Id: 9eda8797-49e4-41ea-80ef-bc30d9371dc7
	I1004 01:20:24.711945  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:24.711954  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:24.711983  151348 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-038823-m03","kind":"nodes","uid":"aecf3685-48bc-4468-b845-c7c671e5cd13"}}
	I1004 01:20:24.712025  151348 node.go:124] successfully deleted node "m03"
	I1004 01:20:24.712034  151348 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
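The DELETE request logged above removes the stale m03 Node object through the API server before the node is rejoined. A minimal client-go sketch that issues the equivalent call; the kubeconfig location is an assumption, the test uses the minikube profile's own config:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig source; adjust for the target cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Mirrors DELETE /api/v1/nodes/multinode-038823-m03 from the log above.
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-038823-m03", metav1.DeleteOptions{}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node deleted")
}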
	I1004 01:20:24.712057  151348 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1004 01:20:24.712077  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e3z02o.7vxaf7sdv1h62935 --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-038823-m03"
	I1004 01:20:24.771091  151348 command_runner.go:130] > [preflight] Running pre-flight checks
	I1004 01:20:24.935506  151348 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1004 01:20:24.935605  151348 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1004 01:20:24.999644  151348 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:20:24.999674  151348 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:20:24.999680  151348 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1004 01:20:25.152226  151348 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1004 01:20:25.681656  151348 command_runner.go:130] > This node has joined the cluster:
	I1004 01:20:25.681681  151348 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1004 01:20:25.681687  151348 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1004 01:20:25.681694  151348 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1004 01:20:25.684639  151348 command_runner.go:130] ! W1004 01:20:24.764663    2337 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1004 01:20:25.684667  151348 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1004 01:20:25.684677  151348 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1004 01:20:25.684690  151348 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1004 01:20:25.684853  151348 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1004 01:20:25.979781  151348 start.go:306] JoinCluster complete in 4.753212666s
	I1004 01:20:25.979813  151348 cni.go:84] Creating CNI manager for ""
	I1004 01:20:25.979820  151348 cni.go:136] 3 nodes found, recommending kindnet
	I1004 01:20:25.979878  151348 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 01:20:25.985536  151348 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1004 01:20:25.985564  151348 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1004 01:20:25.985575  151348 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1004 01:20:25.985582  151348 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1004 01:20:25.985588  151348 command_runner.go:130] > Access: 2023-10-04 01:16:15.620741754 +0000
	I1004 01:20:25.985595  151348 command_runner.go:130] > Modify: 2023-09-19 00:07:45.000000000 +0000
	I1004 01:20:25.985600  151348 command_runner.go:130] > Change: 2023-10-04 01:16:13.762741754 +0000
	I1004 01:20:25.985604  151348 command_runner.go:130] >  Birth: -
	I1004 01:20:25.985835  151348 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 01:20:25.985871  151348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 01:20:26.003338  151348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 01:20:26.367314  151348 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:20:26.372080  151348 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1004 01:20:26.374570  151348 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1004 01:20:26.385080  151348 command_runner.go:130] > daemonset.apps/kindnet configured
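The kindnet CNI manifest copied to /var/tmp/minikube/cni.yaml is applied with a plain kubectl apply, as the command above shows. A minimal sketch of the same invocation from Go, assuming kubectl is on PATH (illustrative only, not the ssh_runner call used in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Runs: kubectl apply --kubeconfig /var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	cmd := exec.Command("kubectl",
		"apply",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		panic(err)
	}
}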
	I1004 01:20:26.387874  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:20:26.388112  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:20:26.388457  151348 round_trippers.go:463] GET https://192.168.39.212:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1004 01:20:26.388470  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.388479  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.388484  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.391216  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:20:26.391232  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.391238  151348 round_trippers.go:580]     Audit-Id: d356083d-d734-4411-a489-962f9eb3b6bb
	I1004 01:20:26.391243  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.391248  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.391253  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.391259  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.391264  151348 round_trippers.go:580]     Content-Length: 291
	I1004 01:20:26.391269  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.391375  151348 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"968d331b-387f-4038-90f4-a22eadfc502a","resourceVersion":"901","creationTimestamp":"2023-10-04T01:06:23Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1004 01:20:26.391466  151348 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-038823" context rescaled to 1 replicas
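The coredns deployment is read and, if needed, rescaled through the Scale subresource, which is what the GET .../deployments/coredns/scale request above corresponds to. A small client-go sketch of that read-then-update, with the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Read the Scale subresource of the coredns deployment.
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Rescale to a single replica only if it is not already there.
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(context.Background(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns scaled to 1 replica")
}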
	I1004 01:20:26.391491  151348 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.44 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime: ControlPlane:false Worker:true}
	I1004 01:20:26.393304  151348 out.go:177] * Verifying Kubernetes components...
	I1004 01:20:26.394824  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:20:26.408235  151348 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:20:26.408443  151348 kapi.go:59] client config for multinode-038823: &rest.Config{Host:"https://192.168.39.212:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/multinode-038823/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:20:26.408677  151348 node_ready.go:35] waiting up to 6m0s for node "multinode-038823-m03" to be "Ready" ...
	I1004 01:20:26.408743  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:26.408751  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.408759  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.408768  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.412842  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:20:26.412860  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.412867  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.412872  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.412877  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.412885  151348 round_trippers.go:580]     Audit-Id: 3cdbf1d6-6b18-4eab-a8f2-282238cfee24
	I1004 01:20:26.412893  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.412902  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.413006  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"c27c1c5e-d647-420e-99d5-0699ba344b3d","resourceVersion":"1237","creationTimestamp":"2023-10-04T01:20:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1004 01:20:26.413346  151348 node_ready.go:49] node "multinode-038823-m03" has status "Ready":"True"
	I1004 01:20:26.413364  151348 node_ready.go:38] duration metric: took 4.671434ms waiting for node "multinode-038823-m03" to be "Ready" ...
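Node readiness above is decided by the NodeReady condition in the node's status, polled until it reports True or the 6m0s budget runs out. A self-contained sketch of that pattern with client-go; the interval, timeout, and node name are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6 minutes, starting immediately.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := client.CoreV1().Nodes().Get(ctx, "multinode-038823-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			return nodeReady(n), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}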
	I1004 01:20:26.413376  151348 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:20:26.413450  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods
	I1004 01:20:26.413473  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.413484  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.413513  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.420316  151348 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1004 01:20:26.420332  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.420339  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.420345  151348 round_trippers.go:580]     Audit-Id: 881c9253-66e3-4c2f-9b92-d55866fffcf3
	I1004 01:20:26.420350  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.420355  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.420360  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.420365  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.423162  151348 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1245"},"items":[{"metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82079 chars]
	I1004 01:20:26.425604  151348 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.425679  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-xbln6
	I1004 01:20:26.425689  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.425697  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.425703  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.428804  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:26.428822  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.428832  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.428841  151348 round_trippers.go:580]     Audit-Id: 22e93230-a19b-4b49-b680-841169f206ec
	I1004 01:20:26.428851  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.428859  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.428867  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.428873  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.429567  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-xbln6","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"956d98ac-25cb-4d19-a9c7-c3a9682eff67","resourceVersion":"897","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"be3cae0c-a682-4de1-a805-e3fea4573557","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"be3cae0c-a682-4de1-a805-e3fea4573557\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I1004 01:20:26.430005  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:26.430020  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.430030  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.430039  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.434183  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:20:26.434198  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.434204  151348 round_trippers.go:580]     Audit-Id: d9858275-e227-4c9d-959e-bbb0232c903f
	I1004 01:20:26.434210  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.434218  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.434230  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.434244  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.434250  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.435186  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:26.435547  151348 pod_ready.go:92] pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:26.435565  151348 pod_ready.go:81] duration metric: took 9.939685ms waiting for pod "coredns-5dd5756b68-xbln6" in "kube-system" namespace to be "Ready" ...
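Each of the system pods checked next (etcd, kube-apiserver, kube-controller-manager, the kube-proxy instances, kube-scheduler) is judged by the same PodReady condition that the coredns pod just passed. A minimal sketch of that predicate for a single pod, with the pod name taken from the log above and the kubeconfig path assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5dd5756b68-xbln6", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready: %v\n", pod.Name, podReady(pod))
}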
	I1004 01:20:26.435589  151348 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.435659  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-038823
	I1004 01:20:26.435670  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.435681  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.435692  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.440120  151348 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1004 01:20:26.440142  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.440152  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.440159  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.440167  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.440175  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.440184  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.440193  151348 round_trippers.go:580]     Audit-Id: ffd2b1cc-5d52-4ff8-b4a5-36ba0ea10b4a
	I1004 01:20:26.440299  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-038823","namespace":"kube-system","uid":"040d1cb8-2a9c-42f5-bfaa-ca4f4e854c13","resourceVersion":"865","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.212:2379","kubernetes.io/config.hash":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.mirror":"abbd5cd3d9bffaa87ea4e38964623ffd","kubernetes.io/config.seen":"2023-10-04T01:06:24.071709550Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I1004 01:20:26.440654  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:26.440668  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.440678  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.440686  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.444375  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:26.444391  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.444399  151348 round_trippers.go:580]     Audit-Id: 3272f45e-cd35-4a84-9b66-21693dc066a7
	I1004 01:20:26.444409  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.444417  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.444425  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.444433  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.444442  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.444587  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:26.444918  151348 pod_ready.go:92] pod "etcd-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:26.444936  151348 pod_ready.go:81] duration metric: took 9.33444ms waiting for pod "etcd-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.444962  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.445030  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-038823
	I1004 01:20:26.445040  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.445052  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.445065  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.447078  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:20:26.447093  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.447100  151348 round_trippers.go:580]     Audit-Id: 6ce8906c-e388-4198-b3a1-1c1b9f3149a2
	I1004 01:20:26.447109  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.447117  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.447127  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.447140  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.447148  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.447283  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-038823","namespace":"kube-system","uid":"8f46d14f-fac3-4029-af40-ad242d6e93e1","resourceVersion":"876","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.212:8443","kubernetes.io/config.hash":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.mirror":"f34f143a5b95a664a6f0b6f04bfc8d7d","kubernetes.io/config.seen":"2023-10-04T01:06:24.071714521Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I1004 01:20:26.447690  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:26.447706  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.447716  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.447725  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.449744  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:20:26.449757  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.449763  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.449771  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.449779  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.449792  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.449804  151348 round_trippers.go:580]     Audit-Id: 04abe206-b34f-4f8a-8e26-4f59b723ce11
	I1004 01:20:26.449813  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.449973  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:26.450340  151348 pod_ready.go:92] pod "kube-apiserver-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:26.450356  151348 pod_ready.go:81] duration metric: took 5.383795ms waiting for pod "kube-apiserver-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.450365  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.450417  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-038823
	I1004 01:20:26.450425  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.450432  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.450440  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.452289  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:20:26.452318  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.452329  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.452337  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.452345  151348 round_trippers.go:580]     Audit-Id: 95e2503c-2908-4507-816c-1da84c159990
	I1004 01:20:26.452353  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.452361  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.452369  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.452516  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-038823","namespace":"kube-system","uid":"ace8ff54-191a-4969-bc58-ad0440f25084","resourceVersion":"816","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.mirror":"aa1e06ef6f8d813f998c818f0bbb8da2","kubernetes.io/config.seen":"2023-10-04T01:06:24.071715949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I1004 01:20:26.452977  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:26.452994  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.453004  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.453013  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.454804  151348 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1004 01:20:26.454816  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.454822  151348 round_trippers.go:580]     Audit-Id: 73540ba1-1112-499c-9f74-4542e2765c04
	I1004 01:20:26.454827  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.454831  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.454837  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.454841  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.454847  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.455084  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:26.455370  151348 pod_ready.go:92] pod "kube-controller-manager-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:26.455384  151348 pod_ready.go:81] duration metric: took 5.011676ms waiting for pod "kube-controller-manager-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.455392  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:26.609689  151348 request.go:629] Waited for 154.239204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:20:26.609759  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hgg2z
	I1004 01:20:26.609766  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.609777  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.609786  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.612940  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:26.612970  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.612981  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.612989  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.612998  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.613006  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.613014  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.613023  151348 round_trippers.go:580]     Audit-Id: 18e19913-7a0a-49cb-baed-9a749f4ebdaa
	I1004 01:20:26.613331  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-hgg2z","generateName":"kube-proxy-","namespace":"kube-system","uid":"28d3f9c9-4eb8-4c36-81b0-1726a87d20a6","resourceVersion":"1075","creationTimestamp":"2023-10-04T01:07:18Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:07:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1004 01:20:26.809307  151348 request.go:629] Waited for 195.405341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:20:26.809382  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m02
	I1004 01:20:26.809387  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:26.809395  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:26.809406  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:26.812219  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:20:26.812243  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:26.812254  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:26.812264  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:26.812273  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:26.812282  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:26 GMT
	I1004 01:20:26.812294  151348 round_trippers.go:580]     Audit-Id: 86a3c595-01b4-492a-a1e3-affaf40025a7
	I1004 01:20:26.812319  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:26.812512  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m02","uid":"a4da8b57-0ac8-4804-bc46-62830c7335ea","resourceVersion":"1059","creationTimestamp":"2023-10-04T01:18:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:18:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1004 01:20:26.812860  151348 pod_ready.go:92] pod "kube-proxy-hgg2z" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:26.812883  151348 pod_ready.go:81] duration metric: took 357.484891ms waiting for pod "kube-proxy-hgg2z" in "kube-system" namespace to be "Ready" ...
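The "Waited for ... due to client-side throttling" lines come from client-go's token-bucket rate limiter: with QPS and Burst left at zero in rest.Config (as in the dumped client config above), the client falls back to its defaults of roughly 5 requests per second with a burst of 10, so these rapid status polls queue up for a few hundred milliseconds each. A sketch of raising those limits on a config; the values chosen here are illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset %T created with QPS=%v Burst=%v\n", client, cfg.QPS, cfg.Burst)
}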
	I1004 01:20:26.812901  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:27.009349  151348 request.go:629] Waited for 196.361902ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:20:27.009443  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:20:27.009455  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:27.009468  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:27.009483  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:27.012429  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:20:27.012460  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:27.012471  151348 round_trippers.go:580]     Audit-Id: 6a9049a8-1291-40a3-9b82-deee77c2c8b2
	I1004 01:20:27.012479  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:27.012487  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:27.012496  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:27.012503  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:27.012511  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:27 GMT
	I1004 01:20:27.012830  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psqss","generateName":"kube-proxy-","namespace":"kube-system","uid":"455f6f13-5661-4b4e-847b-9266e44c03d8","resourceVersion":"1242","creationTimestamp":"2023-10-04T01:08:09Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5883 chars]
	I1004 01:20:27.209665  151348 request.go:629] Waited for 196.365859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:27.209727  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:27.209733  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:27.209744  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:27.209753  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:27.213259  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:27.213288  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:27.213299  151348 round_trippers.go:580]     Audit-Id: c86ee261-fec2-472e-96c7-e08bd991741b
	I1004 01:20:27.213313  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:27.213322  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:27.213330  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:27.213336  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:27.213345  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:27 GMT
	I1004 01:20:27.214066  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"c27c1c5e-d647-420e-99d5-0699ba344b3d","resourceVersion":"1237","creationTimestamp":"2023-10-04T01:20:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1004 01:20:27.408780  151348 request.go:629] Waited for 194.297634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:20:27.408845  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-psqss
	I1004 01:20:27.408850  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:27.408859  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:27.408865  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:27.411988  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:27.412020  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:27.412029  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:27.412036  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:27.412045  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:27 GMT
	I1004 01:20:27.412053  151348 round_trippers.go:580]     Audit-Id: 61a4ab48-e167-4452-b769-f5b9e4f97f43
	I1004 01:20:27.412061  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:27.412070  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:27.412295  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-psqss","generateName":"kube-proxy-","namespace":"kube-system","uid":"455f6f13-5661-4b4e-847b-9266e44c03d8","resourceVersion":"1255","creationTimestamp":"2023-10-04T01:08:09Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:08:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1004 01:20:27.609145  151348 request.go:629] Waited for 196.399704ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:27.609224  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823-m03
	I1004 01:20:27.609232  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:27.609244  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:27.609258  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:27.612746  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:27.612773  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:27.612781  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:27.612790  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:27.612798  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:27 GMT
	I1004 01:20:27.612808  151348 round_trippers.go:580]     Audit-Id: dbd1cbcf-7e9a-46b0-82ff-c4a918d24e60
	I1004 01:20:27.612824  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:27.612836  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:27.613135  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823-m03","uid":"c27c1c5e-d647-420e-99d5-0699ba344b3d","resourceVersion":"1237","creationTimestamp":"2023-10-04T01:20:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:20:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1004 01:20:27.613419  151348 pod_ready.go:92] pod "kube-proxy-psqss" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:27.613436  151348 pod_ready.go:81] duration metric: took 800.527738ms waiting for pod "kube-proxy-psqss" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:27.613445  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:27.809657  151348 request.go:629] Waited for 196.105375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:20:27.809733  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pz9j4
	I1004 01:20:27.809741  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:27.809753  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:27.809763  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:27.813262  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:27.813286  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:27.813296  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:27 GMT
	I1004 01:20:27.813305  151348 round_trippers.go:580]     Audit-Id: ccca4bb0-4548-47e8-9c01-bb98199d9f7e
	I1004 01:20:27.813313  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:27.813322  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:27.813329  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:27.813336  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:27.813680  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pz9j4","generateName":"kube-proxy-","namespace":"kube-system","uid":"36f00e2f-5611-43ae-94b5-d9dde6784128","resourceVersion":"791","creationTimestamp":"2023-10-04T01:06:36Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"1217f67b-200a-4eda-8318-ce51dd6b9288","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1217f67b-200a-4eda-8318-ce51dd6b9288\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1004 01:20:28.009537  151348 request.go:629] Waited for 195.409885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:28.009619  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:28.009625  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:28.009638  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:28.009652  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:28.013227  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:28.013248  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:28.013255  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:28.013262  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:28 GMT
	I1004 01:20:28.013270  151348 round_trippers.go:580]     Audit-Id: 4750efbd-e519-4b1b-850d-776d373e1f2c
	I1004 01:20:28.013278  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:28.013288  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:28.013296  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:28.014032  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:28.014356  151348 pod_ready.go:92] pod "kube-proxy-pz9j4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:28.014374  151348 pod_ready.go:81] duration metric: took 400.923197ms waiting for pod "kube-proxy-pz9j4" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:28.014386  151348 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:28.209770  151348 request.go:629] Waited for 195.278631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:20:28.209830  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-038823
	I1004 01:20:28.209835  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:28.209853  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:28.209859  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:28.213306  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:28.213324  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:28.213331  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:28.213338  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:28.213347  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:28.213361  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:28 GMT
	I1004 01:20:28.213370  151348 round_trippers.go:580]     Audit-Id: 0c081faa-0da3-4cfd-bf32-5c56c56b784f
	I1004 01:20:28.213380  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:28.213579  151348 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-038823","namespace":"kube-system","uid":"2da95c67-ae74-41db-a746-455fa043f9a7","resourceVersion":"889","creationTimestamp":"2023-10-04T01:06:24Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.mirror":"c91c3e6ceaa71afd2dcd89a3b0d10076","kubernetes.io/config.seen":"2023-10-04T01:06:24.071717021Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-04T01:06:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I1004 01:20:28.409345  151348 request.go:629] Waited for 195.380067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:28.409410  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes/multinode-038823
	I1004 01:20:28.409415  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:28.409422  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:28.409428  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:28.412526  151348 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1004 01:20:28.412550  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:28.412559  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:28.412567  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:28.412576  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:28.412589  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:28.412594  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:28 GMT
	I1004 01:20:28.412599  151348 round_trippers.go:580]     Audit-Id: 10a6f748-4e5f-43e9-bdcc-e972f29899cd
	I1004 01:20:28.413012  151348 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-04T01:06:20Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I1004 01:20:28.413347  151348 pod_ready.go:92] pod "kube-scheduler-multinode-038823" in "kube-system" namespace has status "Ready":"True"
	I1004 01:20:28.413365  151348 pod_ready.go:81] duration metric: took 398.962541ms waiting for pod "kube-scheduler-multinode-038823" in "kube-system" namespace to be "Ready" ...
	I1004 01:20:28.413380  151348 pod_ready.go:38] duration metric: took 1.999991832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:20:28.413408  151348 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:20:28.413460  151348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:20:28.427424  151348 system_svc.go:56] duration metric: took 14.011536ms WaitForService to wait for kubelet.
	I1004 01:20:28.427451  151348 kubeadm.go:581] duration metric: took 2.035934822s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:20:28.427472  151348 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:20:28.608810  151348 request.go:629] Waited for 181.267359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.212:8443/api/v1/nodes
	I1004 01:20:28.608884  151348 round_trippers.go:463] GET https://192.168.39.212:8443/api/v1/nodes
	I1004 01:20:28.608890  151348 round_trippers.go:469] Request Headers:
	I1004 01:20:28.608898  151348 round_trippers.go:473]     Accept: application/json, */*
	I1004 01:20:28.608904  151348 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1004 01:20:28.611762  151348 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1004 01:20:28.611788  151348 round_trippers.go:577] Response Headers:
	I1004 01:20:28.611799  151348 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: bf55d5be-6e6f-4ae0-ab49-7a30ce8a66e6
	I1004 01:20:28.611804  151348 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: fb9f694d-aee2-46cb-a574-b631a3e54e72
	I1004 01:20:28.611810  151348 round_trippers.go:580]     Date: Wed, 04 Oct 2023 01:20:28 GMT
	I1004 01:20:28.611815  151348 round_trippers.go:580]     Audit-Id: 6ed51507-e6dc-4f01-9261-d3d57651dec3
	I1004 01:20:28.611820  151348 round_trippers.go:580]     Cache-Control: no-cache, private
	I1004 01:20:28.611824  151348 round_trippers.go:580]     Content-Type: application/json
	I1004 01:20:28.612350  151348 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1261"},"items":[{"metadata":{"name":"multinode-038823","uid":"c9313c3a-6265-4e9f-9937-ae21d8c462e9","resourceVersion":"925","creationTimestamp":"2023-10-04T01:06:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-038823","kubernetes.io/os":"linux","minikube.k8s.io/commit":"cacb4070dc820e9f8fe7f94a5c041e95e45c32b1","minikube.k8s.io/name":"multinode-038823","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_04T01_06_25_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15134 chars]
	I1004 01:20:28.612904  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:20:28.612924  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:20:28.612934  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:20:28.612938  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:20:28.612942  151348 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:20:28.612945  151348 node_conditions.go:123] node cpu capacity is 2
	I1004 01:20:28.612949  151348 node_conditions.go:105] duration metric: took 185.472386ms to run NodePressure ...
	I1004 01:20:28.612959  151348 start.go:228] waiting for startup goroutines ...
	I1004 01:20:28.612977  151348 start.go:242] writing updated cluster config ...
	I1004 01:20:28.613255  151348 ssh_runner.go:195] Run: rm -f paused
	I1004 01:20:28.664947  151348 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:20:28.668179  151348 out.go:177] * Done! kubectl is now configured to use "multinode-038823" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:16:14 UTC, ends at Wed 2023-10-04 01:20:30 UTC. --
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.913495136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696382429913480108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8f66a5c4-46a7-474f-ade2-40ae6978d275 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.914688471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=639802f7-e72f-4faa-bf2c-9eb08a189ef4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.914828487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=639802f7-e72f-4faa-bf2c-9eb08a189ef4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.915034596Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4e2b0610eacb152eb43853ee6159bf806c7fbd03436afafe9da5cd1a5b5ccf8,PodSandboxId:5fe50617e2c74e964fb75f579c149882f0cd93e531e2d21d16dd994e624efb69,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696382225363919680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5744ceaf322fbb5084efa9ef9b92cd38d3450475554687fc47a7d891088bba1,PodSandboxId:95a1101f8f537c3c48b6df344c5d19169e5afacc4f222b49b3e680bce7ac489f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696382223310364445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52111b2ba07eae52360d2d5609fb14adf412aad7e30a27aefece35fe3e47297,PodSandboxId:ef362ab5f3c94769ba9c33ec34dcc3da3eec4e9bbf17bb1430941f47585c3cf4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696382210367717525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3485e173f869264bb411875f3de9f9a02c32a840d968c1ca3cac6357124868f1,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696382209022281342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7bacef805a9ba7f922fd78bfa78b615bc6f33750b65583932bd936cad23913,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696382207825659336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a31de4acfc5f7c925b7e536c82936ed8c596b7f39a99da80dff5ee4cfc0f402,PodSandboxId:92bac8c312e5aa549baeb060e83e7f0bc67bb5955d3af5f37ee75f2912e6abc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696382207853621358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde678
4128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70d76cf7a06eebbc454472c13a3f50c14527900015e29b110809944e2b79e96,PodSandboxId:c52887e733733f1023f06e398dafc9dcf4054bc49da10e2ff08849bb4d9c6be6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696382201470077079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334350a53aa473fa6f346becd78411dae9abb8edecfcc3057208c2e942c7cb99,PodSandboxId:7bf520e395f457555cd674f963c467d1fa9ad2990d0f20c4e920ec322896c583,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696382201138138995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.has
h: 18868ac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbccc83cf93ea1b075e73301ae50f3b5845952ba2c762d539df8ddf82354a7d1,PodSandboxId:76f4bdf28e00611f388553a6bc2265ffb2932428cadf92b26c459e86ae4a0d1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696382201078838311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes.container.hash: a2e1edd4,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5112089941d0ff1428e10243b4aa0e9a3724db5e2b118a26f206e18ce661dcd1,PodSandboxId:ea07466f43d19ad53478decd9d5f1d93d79b9d699730a7491b700650999f5906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696382200984957875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=639802f7-e72f-4faa-bf2c-9eb08a189ef4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.936251659Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aabd59b7-4eee-41b9-8e17-9c5c0ed6faf2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.936480173Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5fe50617e2c74e964fb75f579c149882f0cd93e531e2d21d16dd994e624efb69,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-ckxb4,Uid:0a2cc02b-be6a-4874-be28-422aa6bcbd21,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382222864285080,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:16:46.891742672Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:95a1101f8f537c3c48b6df344c5d19169e5afacc4f222b49b3e680bce7ac489f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-xbln6,Uid:956d98ac-25cb-4d19-a9c7-c3a9682eff67,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1696382222661368758,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:16:46.891809022Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:b4bd2f00-0b17-47da-add0-486f8232ea80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382207256022271,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-04T01:16:46.891737510Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92bac8c312e5aa549baeb060e83e7f0bc67bb5955d3af5f37ee75f2912e6abc8,Metadata:&PodSandboxMetadata{Name:kube-proxy-pz9j4,Uid:36f00e2f-5611-43ae-94b5-d9dde6784128,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1696382207250179712,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6784128,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:16:46.891811336Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef362ab5f3c94769ba9c33ec34dcc3da3eec4e9bbf17bb1430941f47585c3cf4,Metadata:&PodSandboxMetadata{Name:kindnet-prsst,Uid:1775280f-c3e2-4162-9287-9b58a90c8f83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382207218386048,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,k8s-app: kindnet,pod-template-genera
tion: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:16:46.891804068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ea07466f43d19ad53478decd9d5f1d93d79b9d699730a7491b700650999f5906,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-038823,Uid:aa1e06ef6f8d813f998c818f0bbb8da2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382200424619348,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: aa1e06ef6f8d813f998c818f0bbb8da2,kubernetes.io/config.seen: 2023-10-04T01:16:39.888868332Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:76f4bdf28e00611f388553a6bc2265ffb2932428cadf92b26c459e86ae4a0d1a,Metadata:&PodSandboxMetada
ta{Name:kube-apiserver-multinode-038823,Uid:f34f143a5b95a664a6f0b6f04bfc8d7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382200420914449,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.212:8443,kubernetes.io/config.hash: f34f143a5b95a664a6f0b6f04bfc8d7d,kubernetes.io/config.seen: 2023-10-04T01:16:39.888873517Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c52887e733733f1023f06e398dafc9dcf4054bc49da10e2ff08849bb4d9c6be6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-038823,Uid:c91c3e6ceaa71afd2dcd89a3b0d10076,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382200416821191,Labels:map[string]string{component: kube-schedule
r,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c91c3e6ceaa71afd2dcd89a3b0d10076,kubernetes.io/config.seen: 2023-10-04T01:16:39.888871835Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7bf520e395f457555cd674f963c467d1fa9ad2990d0f20c4e920ec322896c583,Metadata:&PodSandboxMetadata{Name:etcd-multinode-038823,Uid:abbd5cd3d9bffaa87ea4e38964623ffd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696382200375598427,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.212:2379,kubern
etes.io/config.hash: abbd5cd3d9bffaa87ea4e38964623ffd,kubernetes.io/config.seen: 2023-10-04T01:16:39.888872743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=aabd59b7-4eee-41b9-8e17-9c5c0ed6faf2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.937350693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a28b79b7-076b-45ac-8288-63985df7e845 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.937438323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a28b79b7-076b-45ac-8288-63985df7e845 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.937814696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4e2b0610eacb152eb43853ee6159bf806c7fbd03436afafe9da5cd1a5b5ccf8,PodSandboxId:5fe50617e2c74e964fb75f579c149882f0cd93e531e2d21d16dd994e624efb69,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696382225363919680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5744ceaf322fbb5084efa9ef9b92cd38d3450475554687fc47a7d891088bba1,PodSandboxId:95a1101f8f537c3c48b6df344c5d19169e5afacc4f222b49b3e680bce7ac489f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696382223310364445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52111b2ba07eae52360d2d5609fb14adf412aad7e30a27aefece35fe3e47297,PodSandboxId:ef362ab5f3c94769ba9c33ec34dcc3da3eec4e9bbf17bb1430941f47585c3cf4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696382210367717525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3485e173f869264bb411875f3de9f9a02c32a840d968c1ca3cac6357124868f1,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696382209022281342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a31de4acfc5f7c925b7e536c82936ed8c596b7f39a99da80dff5ee4cfc0f402,PodSandboxId:92bac8c312e5aa549baeb060e83e7f0bc67bb5955d3af5f37ee75f2912e6abc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696382207853621358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde6
784128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70d76cf7a06eebbc454472c13a3f50c14527900015e29b110809944e2b79e96,PodSandboxId:c52887e733733f1023f06e398dafc9dcf4054bc49da10e2ff08849bb4d9c6be6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696382201470077079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Ann
otations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334350a53aa473fa6f346becd78411dae9abb8edecfcc3057208c2e942c7cb99,PodSandboxId:7bf520e395f457555cd674f963c467d1fa9ad2990d0f20c4e920ec322896c583,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696382201138138995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.h
ash: 18868ac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbccc83cf93ea1b075e73301ae50f3b5845952ba2c762d539df8ddf82354a7d1,PodSandboxId:76f4bdf28e00611f388553a6bc2265ffb2932428cadf92b26c459e86ae4a0d1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696382201078838311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes.container.hash: a2e1edd
4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5112089941d0ff1428e10243b4aa0e9a3724db5e2b118a26f206e18ce661dcd1,PodSandboxId:ea07466f43d19ad53478decd9d5f1d93d79b9d699730a7491b700650999f5906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696382200984957875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{io.kubernetes
.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a28b79b7-076b-45ac-8288-63985df7e845 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.968153025Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bcb9d04c-fec1-4656-b676-c1719b4636ac name=/runtime.v1.RuntimeService/Version
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.968242094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bcb9d04c-fec1-4656-b676-c1719b4636ac name=/runtime.v1.RuntimeService/Version
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.969683960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=956e9c05-4a79-4fae-8475-f31994c1b05d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.970177460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696382429970160875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=956e9c05-4a79-4fae-8475-f31994c1b05d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.971022413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30cb12ef-041b-400f-88d2-137c4821252b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.971073226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30cb12ef-041b-400f-88d2-137c4821252b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:29 multinode-038823 crio[709]: time="2023-10-04 01:20:29.971287247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4e2b0610eacb152eb43853ee6159bf806c7fbd03436afafe9da5cd1a5b5ccf8,PodSandboxId:5fe50617e2c74e964fb75f579c149882f0cd93e531e2d21d16dd994e624efb69,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696382225363919680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5744ceaf322fbb5084efa9ef9b92cd38d3450475554687fc47a7d891088bba1,PodSandboxId:95a1101f8f537c3c48b6df344c5d19169e5afacc4f222b49b3e680bce7ac489f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696382223310364445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52111b2ba07eae52360d2d5609fb14adf412aad7e30a27aefece35fe3e47297,PodSandboxId:ef362ab5f3c94769ba9c33ec34dcc3da3eec4e9bbf17bb1430941f47585c3cf4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696382210367717525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3485e173f869264bb411875f3de9f9a02c32a840d968c1ca3cac6357124868f1,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696382209022281342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7bacef805a9ba7f922fd78bfa78b615bc6f33750b65583932bd936cad23913,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696382207825659336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a31de4acfc5f7c925b7e536c82936ed8c596b7f39a99da80dff5ee4cfc0f402,PodSandboxId:92bac8c312e5aa549baeb060e83e7f0bc67bb5955d3af5f37ee75f2912e6abc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696382207853621358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde678
4128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70d76cf7a06eebbc454472c13a3f50c14527900015e29b110809944e2b79e96,PodSandboxId:c52887e733733f1023f06e398dafc9dcf4054bc49da10e2ff08849bb4d9c6be6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696382201470077079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334350a53aa473fa6f346becd78411dae9abb8edecfcc3057208c2e942c7cb99,PodSandboxId:7bf520e395f457555cd674f963c467d1fa9ad2990d0f20c4e920ec322896c583,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696382201138138995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.has
h: 18868ac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbccc83cf93ea1b075e73301ae50f3b5845952ba2c762d539df8ddf82354a7d1,PodSandboxId:76f4bdf28e00611f388553a6bc2265ffb2932428cadf92b26c459e86ae4a0d1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696382201078838311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes.container.hash: a2e1edd4,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5112089941d0ff1428e10243b4aa0e9a3724db5e2b118a26f206e18ce661dcd1,PodSandboxId:ea07466f43d19ad53478decd9d5f1d93d79b9d699730a7491b700650999f5906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696382200984957875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30cb12ef-041b-400f-88d2-137c4821252b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.021240407Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=f208d72e-f17d-4256-aafa-a6e55deb8b28 name=/runtime.v1.RuntimeService/Status
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.021339992Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=f208d72e-f17d-4256-aafa-a6e55deb8b28 name=/runtime.v1.RuntimeService/Status
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.024168427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=56f13b28-18c5-4c43-aa11-5c282d79ce27 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.024241478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=56f13b28-18c5-4c43-aa11-5c282d79ce27 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.025353428Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=63f1ec50-6640-4662-a5e9-884e2e11b200 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.025863450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696382430025846835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125549,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=63f1ec50-6640-4662-a5e9-884e2e11b200 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.026571030Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a7d66c3-c2c7-473a-9029-44d859f5b394 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.026621045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a7d66c3-c2c7-473a-9029-44d859f5b394 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:20:30 multinode-038823 crio[709]: time="2023-10-04 01:20:30.026881969Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4e2b0610eacb152eb43853ee6159bf806c7fbd03436afafe9da5cd1a5b5ccf8,PodSandboxId:5fe50617e2c74e964fb75f579c149882f0cd93e531e2d21d16dd994e624efb69,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1696382225363919680,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-ckxb4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2cc02b-be6a-4874-be28-422aa6bcbd21,},Annotations:map[string]string{io.kubernetes.container.hash: 9a76707b,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5744ceaf322fbb5084efa9ef9b92cd38d3450475554687fc47a7d891088bba1,PodSandboxId:95a1101f8f537c3c48b6df344c5d19169e5afacc4f222b49b3e680bce7ac489f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696382223310364445,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xbln6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 956d98ac-25cb-4d19-a9c7-c3a9682eff67,},Annotations:map[string]string{io.kubernetes.container.hash: b64c56bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e52111b2ba07eae52360d2d5609fb14adf412aad7e30a27aefece35fe3e47297,PodSandboxId:ef362ab5f3c94769ba9c33ec34dcc3da3eec4e9bbf17bb1430941f47585c3cf4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1696382210367717525,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-prsst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod
.uid: 1775280f-c3e2-4162-9287-9b58a90c8f83,},Annotations:map[string]string{io.kubernetes.container.hash: bf81a734,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3485e173f869264bb411875f3de9f9a02c32a840d968c1ca3cac6357124868f1,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696382209022281342,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b7bacef805a9ba7f922fd78bfa78b615bc6f33750b65583932bd936cad23913,PodSandboxId:b9c1295b0b85a5b10ac4ab048a795b8d403d0355969db48b117167ae2bc4fbb5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696382207825659336,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: b4bd2f00-0b17-47da-add0-486f8232ea80,},Annotations:map[string]string{io.kubernetes.container.hash: 7f2fe799,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a31de4acfc5f7c925b7e536c82936ed8c596b7f39a99da80dff5ee4cfc0f402,PodSandboxId:92bac8c312e5aa549baeb060e83e7f0bc67bb5955d3af5f37ee75f2912e6abc8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696382207853621358,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pz9j4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36f00e2f-5611-43ae-94b5-d9dde678
4128,},Annotations:map[string]string{io.kubernetes.container.hash: d5693984,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70d76cf7a06eebbc454472c13a3f50c14527900015e29b110809944e2b79e96,PodSandboxId:c52887e733733f1023f06e398dafc9dcf4054bc49da10e2ff08849bb4d9c6be6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696382201470077079,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c91c3e6ceaa71afd2dcd89a3b0d10076,},Annot
ations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:334350a53aa473fa6f346becd78411dae9abb8edecfcc3057208c2e942c7cb99,PodSandboxId:7bf520e395f457555cd674f963c467d1fa9ad2990d0f20c4e920ec322896c583,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696382201138138995,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abbd5cd3d9bffaa87ea4e38964623ffd,},Annotations:map[string]string{io.kubernetes.container.has
h: 18868ac4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbccc83cf93ea1b075e73301ae50f3b5845952ba2c762d539df8ddf82354a7d1,PodSandboxId:76f4bdf28e00611f388553a6bc2265ffb2932428cadf92b26c459e86ae4a0d1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696382201078838311,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f34f143a5b95a664a6f0b6f04bfc8d7d,},Annotations:map[string]string{io.kubernetes.container.hash: a2e1edd4,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5112089941d0ff1428e10243b4aa0e9a3724db5e2b118a26f206e18ce661dcd1,PodSandboxId:ea07466f43d19ad53478decd9d5f1d93d79b9d699730a7491b700650999f5906,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696382200984957875,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-038823,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa1e06ef6f8d813f998c818f0bbb8da2,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a7d66c3-c2c7-473a-9029-44d859f5b394 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4e2b0610eacb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   5fe50617e2c74       busybox-5bc68d56bd-ckxb4
	b5744ceaf322f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   95a1101f8f537       coredns-5dd5756b68-xbln6
	e52111b2ba07e       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   ef362ab5f3c94       kindnet-prsst
	3485e173f8692       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   b9c1295b0b85a       storage-provisioner
	1a31de4acfc5f       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      3 minutes ago       Running             kube-proxy                1                   92bac8c312e5a       kube-proxy-pz9j4
	2b7bacef805a9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   b9c1295b0b85a       storage-provisioner
	a70d76cf7a06e       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      3 minutes ago       Running             kube-scheduler            1                   c52887e733733       kube-scheduler-multinode-038823
	334350a53aa47       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   7bf520e395f45       etcd-multinode-038823
	cbccc83cf93ea       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      3 minutes ago       Running             kube-apiserver            1                   76f4bdf28e006       kube-apiserver-multinode-038823
	5112089941d0f       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      3 minutes ago       Running             kube-controller-manager   1                   ea07466f43d19       kube-controller-manager-multinode-038823
	
	* 
	* ==> coredns [b5744ceaf322fbb5084efa9ef9b92cd38d3450475554687fc47a7d891088bba1] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49597 - 43054 "HINFO IN 2166046541615265064.9181681099782292555. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020211072s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-038823
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-038823
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=multinode-038823
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_06_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:06:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-038823
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:20:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:17:17 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:17:17 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:17:17 +0000   Wed, 04 Oct 2023 01:06:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:17:17 +0000   Wed, 04 Oct 2023 01:16:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.212
	  Hostname:    multinode-038823
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 09ba921e5b974ee499fdce1a4921eb7b
	  System UUID:                09ba921e-5b97-4ee4-99fd-ce1a4921eb7b
	  Boot ID:                    c858fc4f-d6cd-4318-897a-ae46f25f60c0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-ckxb4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-xbln6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-038823                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-prsst                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-038823             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-038823    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-pz9j4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-038823             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m41s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-038823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-038823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-038823 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-038823 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-038823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-038823 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-038823 event: Registered Node multinode-038823 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-038823 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m51s)  kubelet          Node multinode-038823 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m51s)  kubelet          Node multinode-038823 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m51s)  kubelet          Node multinode-038823 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-038823 event: Registered Node multinode-038823 in Controller
	
	
	Name:               multinode-038823-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-038823-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:18:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-038823-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:20:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:18:44 +0000   Wed, 04 Oct 2023 01:18:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:18:44 +0000   Wed, 04 Oct 2023 01:18:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:18:44 +0000   Wed, 04 Oct 2023 01:18:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:18:44 +0000   Wed, 04 Oct 2023 01:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.181
	  Hostname:    multinode-038823-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6d98153902ad4974b99e7d140dce28b7
	  System UUID:                6d981539-02ad-4974-b99e-7d140dce28b7
	  Boot ID:                    2bec2f70-3e73-4318-99e6-705a0876f3f6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-hln8h    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-cqczw               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-hgg2z            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 104s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet          Node multinode-038823-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet          Node multinode-038823-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m53s                  kubelet          Node multinode-038823-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m14s (x2 over 3m14s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 106s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s (x2 over 106s)    kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s (x2 over 106s)    kubelet          Node multinode-038823-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s (x2 over 106s)    kubelet          Node multinode-038823-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  106s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                106s                   kubelet          Node multinode-038823-m02 status is now: NodeReady
	  Normal   RegisteredNode           102s                   node-controller  Node multinode-038823-m02 event: Registered Node multinode-038823-m02 in Controller
	
	
	Name:               multinode-038823-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-038823-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:20:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-038823-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:20:25 +0000   Wed, 04 Oct 2023 01:20:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:20:25 +0000   Wed, 04 Oct 2023 01:20:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:20:25 +0000   Wed, 04 Oct 2023 01:20:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:20:25 +0000   Wed, 04 Oct 2023 01:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.44
	  Hostname:    multinode-038823-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 789c7086382645abbbd886d3666e46ff
	  System UUID:                789c7086-3826-45ab-bbd8-86d3666e46ff
	  Boot ID:                    9b06054a-21b4-4dd9-a663-6d1381aca70f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-tkn7n    0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kindnet-zg29t               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-psqss            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-038823-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-038823-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-038823-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-038823-m03 status is now: NodeReady
	  Normal   NodeNotReady             68s                 kubelet     Node multinode-038823-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        40s (x2 over 100s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotSchedulable       7s                  kubelet     Node multinode-038823-m03 status is now: NodeNotSchedulable
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)     kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)     kubelet     Node multinode-038823-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)     kubelet     Node multinode-038823-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-038823-m03 status is now: NodeReady
	  Normal   Starting                 5s                  kubelet     Starting kubelet.
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071763] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.340616] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.503318] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.149273] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.680611] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.419498] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.106479] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.138145] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.102888] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.209065] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +16.693166] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[ +19.317793] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [334350a53aa473fa6f346becd78411dae9abb8edecfcc3057208c2e942c7cb99] <==
	* {"level":"info","ts":"2023-10-04T01:16:43.120538Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f8d3b95e5bbb719c","local-member-id":"eed9c28654b6490f","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:16:43.120581Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:16:43.126533Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-04T01:16:43.126636Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-10-04T01:16:43.126836Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.212:2380"}
	{"level":"info","ts":"2023-10-04T01:16:43.127293Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"eed9c28654b6490f","initial-advertise-peer-urls":["https://192.168.39.212:2380"],"listen-peer-urls":["https://192.168.39.212:2380"],"advertise-client-urls":["https://192.168.39.212:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.212:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:16:43.128057Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:16:44.703704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:16:44.70426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:16:44.704299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgPreVoteResp from eed9c28654b6490f at term 2"}
	{"level":"info","ts":"2023-10-04T01:16:44.704331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:16:44.704373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f received MsgVoteResp from eed9c28654b6490f at term 3"}
	{"level":"info","ts":"2023-10-04T01:16:44.70441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"eed9c28654b6490f became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:16:44.704436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: eed9c28654b6490f elected leader eed9c28654b6490f at term 3"}
	{"level":"info","ts":"2023-10-04T01:16:44.709072Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"eed9c28654b6490f","local-member-attributes":"{Name:multinode-038823 ClientURLs:[https://192.168.39.212:2379]}","request-path":"/0/members/eed9c28654b6490f/attributes","cluster-id":"f8d3b95e5bbb719c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:16:44.709405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:16:44.70944Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:16:44.709491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T01:16:44.70965Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:16:44.710928Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:16:44.710953Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.212:2379"}
	{"level":"info","ts":"2023-10-04T01:16:48.622228Z","caller":"traceutil/trace.go:171","msg":"trace[730861360] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"191.014631ms","start":"2023-10-04T01:16:48.431201Z","end":"2023-10-04T01:16:48.622215Z","steps":["trace[730861360] 'process raft request'  (duration: 188.917201ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:16:48.622393Z","caller":"traceutil/trace.go:171","msg":"trace[1459887777] linearizableReadLoop","detail":"{readStateIndex:826; appliedIndex:825; }","duration":"188.672656ms","start":"2023-10-04T01:16:48.433714Z","end":"2023-10-04T01:16:48.622386Z","steps":["trace[1459887777] 'read index received'  (duration: 186.323998ms)","trace[1459887777] 'applied index is now lower than readState.Index'  (duration: 2.347839ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-04T01:16:48.622466Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"188.748288ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2023-10-04T01:16:48.622496Z","caller":"traceutil/trace.go:171","msg":"trace[77990061] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:776; }","duration":"188.795357ms","start":"2023-10-04T01:16:48.433696Z","end":"2023-10-04T01:16:48.622491Z","steps":["trace[77990061] 'agreement among raft nodes before linearized reading'  (duration: 188.712169ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  01:20:30 up 4 min,  0 users,  load average: 0.51, 0.41, 0.19
	Linux multinode-038823 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [e52111b2ba07eae52360d2d5609fb14adf412aad7e30a27aefece35fe3e47297] <==
	* I1004 01:19:42.233213       1 main.go:250] Node multinode-038823-m03 has CIDR [10.244.3.0/24] 
	I1004 01:19:52.247350       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:19:52.247404       1 main.go:227] handling current node
	I1004 01:19:52.247422       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:19:52.247429       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	I1004 01:19:52.247595       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I1004 01:19:52.247629       1 main.go:250] Node multinode-038823-m03 has CIDR [10.244.3.0/24] 
	I1004 01:20:02.252658       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:20:02.252873       1 main.go:227] handling current node
	I1004 01:20:02.252906       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:20:02.252941       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	I1004 01:20:02.253148       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I1004 01:20:02.253172       1 main.go:250] Node multinode-038823-m03 has CIDR [10.244.3.0/24] 
	I1004 01:20:12.266476       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:20:12.266524       1 main.go:227] handling current node
	I1004 01:20:12.266542       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:20:12.266548       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	I1004 01:20:12.266640       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I1004 01:20:12.266644       1 main.go:250] Node multinode-038823-m03 has CIDR [10.244.3.0/24] 
	I1004 01:20:22.277394       1 main.go:223] Handling node with IPs: map[192.168.39.212:{}]
	I1004 01:20:22.277451       1 main.go:227] handling current node
	I1004 01:20:22.277463       1 main.go:223] Handling node with IPs: map[192.168.39.181:{}]
	I1004 01:20:22.277469       1 main.go:250] Node multinode-038823-m02 has CIDR [10.244.1.0/24] 
	I1004 01:20:22.277576       1 main.go:223] Handling node with IPs: map[192.168.39.44:{}]
	I1004 01:20:22.277609       1 main.go:250] Node multinode-038823-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [cbccc83cf93ea1b075e73301ae50f3b5845952ba2c762d539df8ddf82354a7d1] <==
	* I1004 01:16:46.163186       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1004 01:16:46.167385       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1004 01:16:46.167513       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1004 01:16:46.268485       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1004 01:16:46.270903       1 aggregator.go:166] initial CRD sync complete...
	I1004 01:16:46.270940       1 autoregister_controller.go:141] Starting autoregister controller
	I1004 01:16:46.270947       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 01:16:46.270954       1 cache.go:39] Caches are synced for autoregister controller
	I1004 01:16:46.291643       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 01:16:46.295855       1 shared_informer.go:318] Caches are synced for configmaps
	I1004 01:16:46.295990       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1004 01:16:46.296016       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1004 01:16:46.296579       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 01:16:46.301046       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1004 01:16:46.318330       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 01:16:46.322462       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 01:16:46.330498       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1004 01:16:47.093206       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 01:16:49.062694       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1004 01:16:49.266897       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1004 01:16:49.283175       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1004 01:16:49.377728       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 01:16:49.384894       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 01:16:58.872314       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 01:16:58.889385       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [5112089941d0ff1428e10243b4aa0e9a3724db5e2b118a26f206e18ce661dcd1] <==
	* I1004 01:18:44.241726       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-038823-m02\" does not exist"
	I1004 01:18:44.241959       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m03"
	I1004 01:18:44.243118       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-8g74z" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-8g74z"
	I1004 01:18:44.258648       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-038823-m02" podCIDRs=["10.244.1.0/24"]
	I1004 01:18:44.291683       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:18:45.144851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="118.241µs"
	I1004 01:18:48.860414       1 event.go:307] "Event occurred" object="multinode-038823-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-038823-m02 event: Registered Node multinode-038823-m02 in Controller"
	I1004 01:18:58.405269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="226.928µs"
	I1004 01:18:58.992702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="130.234µs"
	I1004 01:18:59.001198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="58.385µs"
	I1004 01:19:22.168124       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:20:21.695290       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-hln8h"
	I1004 01:20:21.703194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="21.471807ms"
	I1004 01:20:21.722867       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="19.596915ms"
	I1004 01:20:21.722969       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.98µs"
	I1004 01:20:21.740923       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="44.254µs"
	I1004 01:20:23.273427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.037799ms"
	I1004 01:20:23.273797       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.43µs"
	I1004 01:20:24.706547       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:20:25.382295       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:20:25.388934       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-038823-m03\" does not exist"
	I1004 01:20:25.389207       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-tkn7n" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-tkn7n"
	I1004 01:20:25.417000       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-038823-m03" podCIDRs=["10.244.2.0/24"]
	I1004 01:20:25.735811       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-038823-m02"
	I1004 01:20:26.325163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="99.256µs"
	
	* 
	* ==> kube-proxy [1a31de4acfc5f7c925b7e536c82936ed8c596b7f39a99da80dff5ee4cfc0f402] <==
	* I1004 01:16:48.139146       1 server_others.go:69] "Using iptables proxy"
	I1004 01:16:48.174590       1 node.go:141] Successfully retrieved node IP: 192.168.39.212
	I1004 01:16:48.492444       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:16:48.492495       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:16:48.495483       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:16:48.495524       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:16:48.495652       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:16:48.495660       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:16:48.496556       1 config.go:188] "Starting service config controller"
	I1004 01:16:48.496570       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:16:48.496586       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:16:48.496589       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:16:48.497166       1 config.go:315] "Starting node config controller"
	I1004 01:16:48.497173       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:16:48.597437       1 shared_informer.go:318] Caches are synced for node config
	I1004 01:16:48.597559       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:16:48.597599       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [a70d76cf7a06eebbc454472c13a3f50c14527900015e29b110809944e2b79e96] <==
	* I1004 01:16:44.102115       1 serving.go:348] Generated self-signed cert in-memory
	W1004 01:16:46.223333       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 01:16:46.223381       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:16:46.223392       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 01:16:46.223400       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 01:16:46.257188       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 01:16:46.257276       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:16:46.260672       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:16:46.261108       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:16:46.261263       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:16:46.263485       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:16:46.365826       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:16:14 UTC, ends at Wed 2023-10-04 01:20:30 UTC. --
	Oct 04 01:16:50 multinode-038823 kubelet[915]: E1004 01:16:50.562220     915 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:50 multinode-038823 kubelet[915]: E1004 01:16:50.562249     915 projected.go:198] Error preparing data for projected volume kube-api-access-9488z for pod default/busybox-5bc68d56bd-ckxb4: object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:50 multinode-038823 kubelet[915]: E1004 01:16:50.562296     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a2cc02b-be6a-4874-be28-422aa6bcbd21-kube-api-access-9488z podName:0a2cc02b-be6a-4874-be28-422aa6bcbd21 nodeName:}" failed. No retries permitted until 2023-10-04 01:16:54.562283739 +0000 UTC m=+14.901545766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9488z" (UniqueName: "kubernetes.io/projected/0a2cc02b-be6a-4874-be28-422aa6bcbd21-kube-api-access-9488z") pod "busybox-5bc68d56bd-ckxb4" (UID: "0a2cc02b-be6a-4874-be28-422aa6bcbd21") : object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:50 multinode-038823 kubelet[915]: E1004 01:16:50.934836     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-xbln6" podUID="956d98ac-25cb-4d19-a9c7-c3a9682eff67"
	Oct 04 01:16:51 multinode-038823 kubelet[915]: E1004 01:16:51.936365     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-ckxb4" podUID="0a2cc02b-be6a-4874-be28-422aa6bcbd21"
	Oct 04 01:16:52 multinode-038823 kubelet[915]: E1004 01:16:52.935627     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-xbln6" podUID="956d98ac-25cb-4d19-a9c7-c3a9682eff67"
	Oct 04 01:16:53 multinode-038823 kubelet[915]: E1004 01:16:53.936510     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-ckxb4" podUID="0a2cc02b-be6a-4874-be28-422aa6bcbd21"
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.497280     915 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.497458     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/956d98ac-25cb-4d19-a9c7-c3a9682eff67-config-volume podName:956d98ac-25cb-4d19-a9c7-c3a9682eff67 nodeName:}" failed. No retries permitted until 2023-10-04 01:17:02.497434877 +0000 UTC m=+22.836696906 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/956d98ac-25cb-4d19-a9c7-c3a9682eff67-config-volume") pod "coredns-5dd5756b68-xbln6" (UID: "956d98ac-25cb-4d19-a9c7-c3a9682eff67") : object "kube-system"/"coredns" not registered
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.598427     915 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.598488     915 projected.go:198] Error preparing data for projected volume kube-api-access-9488z for pod default/busybox-5bc68d56bd-ckxb4: object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.598568     915 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0a2cc02b-be6a-4874-be28-422aa6bcbd21-kube-api-access-9488z podName:0a2cc02b-be6a-4874-be28-422aa6bcbd21 nodeName:}" failed. No retries permitted until 2023-10-04 01:17:02.598552697 +0000 UTC m=+22.937814744 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9488z" (UniqueName: "kubernetes.io/projected/0a2cc02b-be6a-4874-be28-422aa6bcbd21-kube-api-access-9488z") pod "busybox-5bc68d56bd-ckxb4" (UID: "0a2cc02b-be6a-4874-be28-422aa6bcbd21") : object "default"/"kube-root-ca.crt" not registered
	Oct 04 01:16:54 multinode-038823 kubelet[915]: E1004 01:16:54.934853     915 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-xbln6" podUID="956d98ac-25cb-4d19-a9c7-c3a9682eff67"
	Oct 04 01:17:40 multinode-038823 kubelet[915]: E1004 01:17:40.067228     915 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 01:17:40 multinode-038823 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 01:17:40 multinode-038823 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 01:17:40 multinode-038823 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 01:18:40 multinode-038823 kubelet[915]: E1004 01:18:40.078332     915 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 01:18:40 multinode-038823 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 01:18:40 multinode-038823 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 01:18:40 multinode-038823 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 01:19:40 multinode-038823 kubelet[915]: E1004 01:19:40.064811     915 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 01:19:40 multinode-038823 kubelet[915]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 01:19:40 multinode-038823 kubelet[915]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 01:19:40 multinode-038823 kubelet[915]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-038823 -n multinode-038823
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-038823 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (688.86s)
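The kubelet loop captured above keeps failing pod sync with "No CNI configuration file in /etc/cni/net.d/" after the restart, so the coredns and busybox pods never leave the NetworkPluginNotReady state. One way to confirm that by hand, not something the harness does (profile and node names are taken from the logs above, and the ssh form mirrors the invocations recorded in the Audit table further down):

	# check whether any CNI config was written on the restarted control plane
	out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 sudo ls -l /etc/cni/net.d/
	# re-read the kubelet journal around the same window as the excerpt above
	out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 sudo journalctl -u kubelet --no-pager -n 50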

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 stop
E1004 01:21:05.195180  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-038823 stop: exit status 82 (2m1.658656193s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-038823"  ...
	* Stopping node "multinode-038823"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-038823 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-038823 status: exit status 3 (18.715890273s)

                                                
                                                
-- stdout --
	multinode-038823
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-038823-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:22:53.834273  154040 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	E1004 01:22:53.834318  154040 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-038823 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-038823 -n multinode-038823
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-038823 -n multinode-038823: exit status 3 (3.1686906s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:22:57.162253  154136 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host
	E1004 01:22:57.162278  154136 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.212:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-038823" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.54s)
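Here the stop timed out (GUEST_STOP_TIMEOUT) while a node still reported "Running", and the follow-up status calls could no longer reach 192.168.39.212 over SSH. The most direct next step is the one the stderr box above already suggests, plus the stop-specific log file it names:

	# collect full logs for the profile, as the error box requests
	out/minikube-linux-amd64 -p multinode-038823 logs --file=logs.txt
	# inspect the stop log referenced in the same box
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log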

                                                
                                    
x
+
TestPreload (182.7s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377961 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m37.289822064s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377961 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-377961 image pull gcr.io/k8s-minikube/busybox: (1.785569353s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-377961
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-377961: (7.084071836s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377961 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1004 01:33:15.375157  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:33:36.337539  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377961 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m13.317862446s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377961 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:523: *** TestPreload FAILED at 2023-10-04 01:34:13.72838536 +0000 UTC m=+3046.099416394
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-377961 -n test-preload-377961
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377961 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-377961 logs -n 25: (1.187950906s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823 sudo cat                                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02:/home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m02 sudo cat                                   | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-038823 node stop m03                                                          | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	| node    | multinode-038823 node start                                                             | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:09 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| stop    | -p multinode-038823                                                                     | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| start   | -p multinode-038823                                                                     | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:11 UTC | 04 Oct 23 01:20 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC |                     |
	| node    | multinode-038823 node delete                                                            | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC | 04 Oct 23 01:20 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-038823 stop                                                                   | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC |                     |
	| start   | -p multinode-038823                                                                     | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:22 UTC | 04 Oct 23 01:30 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC |                     |
	| start   | -p multinode-038823-m02                                                                 | multinode-038823-m02 | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-038823-m03                                                                 | multinode-038823-m03 | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC | 04 Oct 23 01:31 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-038823                                                                 | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC |                     |
	| delete  | -p multinode-038823-m03                                                                 | multinode-038823-m03 | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:31 UTC |
	| delete  | -p multinode-038823                                                                     | multinode-038823     | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:31 UTC |
	| start   | -p test-preload-377961                                                                  | test-preload-377961  | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:32 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-377961 image pull                                                          | test-preload-377961  | jenkins | v1.31.2 | 04 Oct 23 01:32 UTC | 04 Oct 23 01:32 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-377961                                                                  | test-preload-377961  | jenkins | v1.31.2 | 04 Oct 23 01:32 UTC | 04 Oct 23 01:33 UTC |
	| start   | -p test-preload-377961                                                                  | test-preload-377961  | jenkins | v1.31.2 | 04 Oct 23 01:33 UTC | 04 Oct 23 01:34 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-377961 image list                                                          | test-preload-377961  | jenkins | v1.31.2 | 04 Oct 23 01:34 UTC | 04 Oct 23 01:34 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:33:00
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:33:00.234464  156795 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:33:00.234737  156795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:33:00.234748  156795 out.go:309] Setting ErrFile to fd 2...
	I1004 01:33:00.234753  156795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:33:00.234988  156795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:33:00.235583  156795 out.go:303] Setting JSON to false
	I1004 01:33:00.236506  156795 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8132,"bootTime":1696375049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:33:00.236572  156795 start.go:138] virtualization: kvm guest
	I1004 01:33:00.238811  156795 out.go:177] * [test-preload-377961] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:33:00.240682  156795 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:33:00.242067  156795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:33:00.240694  156795 notify.go:220] Checking for updates...
	I1004 01:33:00.243561  156795 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:33:00.244906  156795 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:33:00.247253  156795 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:33:00.248550  156795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:33:00.250087  156795 config.go:182] Loaded profile config "test-preload-377961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1004 01:33:00.250480  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:33:00.250555  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:33:00.266030  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38039
	I1004 01:33:00.266458  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:33:00.266952  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:33:00.266976  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:33:00.267380  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:33:00.267574  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:00.269685  156795 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1004 01:33:00.271276  156795 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:33:00.271686  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:33:00.271720  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:33:00.286290  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34389
	I1004 01:33:00.286731  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:33:00.287163  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:33:00.287194  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:33:00.287503  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:33:00.287661  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:00.322120  156795 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:33:00.323509  156795 start.go:298] selected driver: kvm2
	I1004 01:33:00.323522  156795 start.go:902] validating driver "kvm2" against &{Name:test-preload-377961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-377961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:33:00.323624  156795 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:33:00.324336  156795 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:33:00.324423  156795 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:33:00.340458  156795 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:33:00.340841  156795 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:33:00.340900  156795 cni.go:84] Creating CNI manager for ""
	I1004 01:33:00.340914  156795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:33:00.340924  156795 start_flags.go:321] config:
	{Name:test-preload-377961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-377961 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:33:00.341230  156795 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:33:00.343351  156795 out.go:177] * Starting control plane node test-preload-377961 in cluster test-preload-377961
	I1004 01:33:00.344717  156795 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1004 01:33:00.373597  156795 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1004 01:33:00.373627  156795 cache.go:57] Caching tarball of preloaded images
	I1004 01:33:00.373765  156795 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1004 01:33:00.375814  156795 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1004 01:33:00.377202  156795 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1004 01:33:00.408929  156795 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1004 01:33:03.395818  156795 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1004 01:33:03.395914  156795 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1004 01:33:04.418954  156795 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I1004 01:33:04.419087  156795 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/config.json ...
	I1004 01:33:04.419313  156795 start.go:365] acquiring machines lock for test-preload-377961: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:33:04.419382  156795 start.go:369] acquired machines lock for "test-preload-377961" in 46.34µs
	I1004 01:33:04.419398  156795 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:33:04.419403  156795 fix.go:54] fixHost starting: 
	I1004 01:33:04.419668  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:33:04.419705  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:33:04.434348  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41981
	I1004 01:33:04.434895  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:33:04.435408  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:33:04.435435  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:33:04.435804  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:33:04.436023  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:04.436208  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetState
	I1004 01:33:04.438006  156795 fix.go:102] recreateIfNeeded on test-preload-377961: state=Stopped err=<nil>
	I1004 01:33:04.438031  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	W1004 01:33:04.438247  156795 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:33:04.440725  156795 out.go:177] * Restarting existing kvm2 VM for "test-preload-377961" ...
	I1004 01:33:04.442416  156795 main.go:141] libmachine: (test-preload-377961) Calling .Start
	I1004 01:33:04.442634  156795 main.go:141] libmachine: (test-preload-377961) Ensuring networks are active...
	I1004 01:33:04.443388  156795 main.go:141] libmachine: (test-preload-377961) Ensuring network default is active
	I1004 01:33:04.443821  156795 main.go:141] libmachine: (test-preload-377961) Ensuring network mk-test-preload-377961 is active
	I1004 01:33:04.444222  156795 main.go:141] libmachine: (test-preload-377961) Getting domain xml...
	I1004 01:33:04.444932  156795 main.go:141] libmachine: (test-preload-377961) Creating domain...
	I1004 01:33:05.658987  156795 main.go:141] libmachine: (test-preload-377961) Waiting to get IP...
	I1004 01:33:05.659865  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:05.660211  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:05.660323  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:05.660187  156841 retry.go:31] will retry after 268.021427ms: waiting for machine to come up
	I1004 01:33:05.929686  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:05.930120  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:05.930151  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:05.930062  156841 retry.go:31] will retry after 305.293885ms: waiting for machine to come up
	I1004 01:33:06.236489  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:06.236822  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:06.236851  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:06.236777  156841 retry.go:31] will retry after 384.913183ms: waiting for machine to come up
	I1004 01:33:06.623322  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:06.623777  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:06.623805  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:06.623733  156841 retry.go:31] will retry after 544.800651ms: waiting for machine to come up
	I1004 01:33:07.170470  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:07.170865  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:07.170896  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:07.170800  156841 retry.go:31] will retry after 479.851516ms: waiting for machine to come up
	I1004 01:33:07.651965  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:07.652266  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:07.652293  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:07.652228  156841 retry.go:31] will retry after 818.340156ms: waiting for machine to come up
	I1004 01:33:08.472255  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:08.472752  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:08.472785  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:08.472703  156841 retry.go:31] will retry after 836.622244ms: waiting for machine to come up
	I1004 01:33:09.310613  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:09.311021  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:09.311050  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:09.310975  156841 retry.go:31] will retry after 1.316111395s: waiting for machine to come up
	I1004 01:33:10.628800  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:10.629160  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:10.629184  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:10.629128  156841 retry.go:31] will retry after 1.335160865s: waiting for machine to come up
	I1004 01:33:11.966734  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:11.967233  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:11.967266  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:11.967172  156841 retry.go:31] will retry after 2.114207154s: waiting for machine to come up
	I1004 01:33:14.082745  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:14.083151  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:14.083186  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:14.083084  156841 retry.go:31] will retry after 2.745235518s: waiting for machine to come up
	I1004 01:33:16.830941  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:16.831629  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:16.831661  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:16.831588  156841 retry.go:31] will retry after 2.234594093s: waiting for machine to come up
	I1004 01:33:19.068874  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:19.069170  156795 main.go:141] libmachine: (test-preload-377961) DBG | unable to find current IP address of domain test-preload-377961 in network mk-test-preload-377961
	I1004 01:33:19.069191  156795 main.go:141] libmachine: (test-preload-377961) DBG | I1004 01:33:19.069144  156841 retry.go:31] will retry after 3.844562809s: waiting for machine to come up
	I1004 01:33:22.917511  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:22.917982  156795 main.go:141] libmachine: (test-preload-377961) Found IP for machine: 192.168.39.28
	I1004 01:33:22.918017  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has current primary IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:22.918031  156795 main.go:141] libmachine: (test-preload-377961) Reserving static IP address...
	I1004 01:33:22.918406  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "test-preload-377961", mac: "52:54:00:df:c9:e5", ip: "192.168.39.28"} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:22.918434  156795 main.go:141] libmachine: (test-preload-377961) DBG | skip adding static IP to network mk-test-preload-377961 - found existing host DHCP lease matching {name: "test-preload-377961", mac: "52:54:00:df:c9:e5", ip: "192.168.39.28"}
	I1004 01:33:22.918452  156795 main.go:141] libmachine: (test-preload-377961) DBG | Getting to WaitForSSH function...
	I1004 01:33:22.918466  156795 main.go:141] libmachine: (test-preload-377961) Reserved static IP address: 192.168.39.28
	I1004 01:33:22.918478  156795 main.go:141] libmachine: (test-preload-377961) Waiting for SSH to be available...
	I1004 01:33:22.920409  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:22.920692  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:22.920731  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:22.920828  156795 main.go:141] libmachine: (test-preload-377961) DBG | Using SSH client type: external
	I1004 01:33:22.920856  156795 main.go:141] libmachine: (test-preload-377961) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa (-rw-------)
	I1004 01:33:22.920895  156795 main.go:141] libmachine: (test-preload-377961) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.28 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:33:22.920915  156795 main.go:141] libmachine: (test-preload-377961) DBG | About to run SSH command:
	I1004 01:33:22.920930  156795 main.go:141] libmachine: (test-preload-377961) DBG | exit 0
	I1004 01:33:23.005889  156795 main.go:141] libmachine: (test-preload-377961) DBG | SSH cmd err, output: <nil>: 
	I1004 01:33:23.006231  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetConfigRaw
	I1004 01:33:23.006853  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetIP
	I1004 01:33:23.009115  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.009448  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.009484  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.009685  156795 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/config.json ...
	I1004 01:33:23.009915  156795 machine.go:88] provisioning docker machine ...
	I1004 01:33:23.009933  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:23.010139  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetMachineName
	I1004 01:33:23.010368  156795 buildroot.go:166] provisioning hostname "test-preload-377961"
	I1004 01:33:23.010395  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetMachineName
	I1004 01:33:23.010577  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.012699  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.013093  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.013124  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.013249  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.013447  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.013665  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.013827  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.013987  156795 main.go:141] libmachine: Using SSH client type: native
	I1004 01:33:23.014320  156795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1004 01:33:23.014333  156795 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-377961 && echo "test-preload-377961" | sudo tee /etc/hostname
	I1004 01:33:23.133634  156795 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-377961
	
	I1004 01:33:23.133670  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.136188  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.136508  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.136542  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.136707  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.136883  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.137036  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.137146  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.137309  156795 main.go:141] libmachine: Using SSH client type: native
	I1004 01:33:23.137622  156795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1004 01:33:23.137642  156795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-377961' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-377961/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-377961' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:33:23.256733  156795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:33:23.256770  156795 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:33:23.256827  156795 buildroot.go:174] setting up certificates
	I1004 01:33:23.256837  156795 provision.go:83] configureAuth start
	I1004 01:33:23.256851  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetMachineName
	I1004 01:33:23.257133  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetIP
	I1004 01:33:23.259533  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.259896  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.259929  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.260080  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.262395  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.262738  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.262762  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.262902  156795 provision.go:138] copyHostCerts
	I1004 01:33:23.262971  156795 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:33:23.262986  156795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:33:23.263085  156795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:33:23.263204  156795 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:33:23.263216  156795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:33:23.263264  156795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:33:23.263400  156795 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:33:23.263413  156795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:33:23.263455  156795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:33:23.263530  156795 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.test-preload-377961 san=[192.168.39.28 192.168.39.28 localhost 127.0.0.1 minikube test-preload-377961]
	I1004 01:33:23.350106  156795 provision.go:172] copyRemoteCerts
	I1004 01:33:23.350167  156795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:33:23.350199  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.352820  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.353130  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.353169  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.353303  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.353486  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.353659  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.353785  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:33:23.441156  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:33:23.462881  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1004 01:33:23.483802  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:33:23.504877  156795 provision.go:86] duration metric: configureAuth took 248.021432ms
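	Annotation: configureAuth above generates a server certificate signed by the shared minikube CA, with the SAN list printed in the provision.go line ("192.168.39.28 localhost 127.0.0.1 minikube test-preload-377961"), then copies ca.pem, server.pem and server-key.pem to /etc/docker on the guest. The sketch below is an illustrative, self-contained version of that signing step using crypto/x509; file names are assumptions (the real paths are the ~/.minikube ones in the log) and it assumes a PKCS#1 RSA CA key.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("ca.pem") // assumed local copies of the CA pair
		if err != nil {
			panic(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			panic(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			panic(err)
		}
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA
		if err != nil {
			panic(err)
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-377961"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision.go log line above.
			DNSNames:    []string{"localhost", "minikube", "test-preload-377961"},
			IPAddresses: []net.IP{net.ParseIP("192.168.39.28"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		writePEM("server.pem", "CERTIFICATE", der, 0644)
		writePEM("server-key.pem", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(serverKey), 0600)
	}

	func writePEM(path, blockType string, der []byte, mode os.FileMode) {
		if err := os.WriteFile(path, pem.EncodeToMemory(&pem.Block{Type: blockType, Bytes: der}), mode); err != nil {
			panic(err)
		}
	}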
	I1004 01:33:23.504906  156795 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:33:23.505083  156795 config.go:182] Loaded profile config "test-preload-377961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1004 01:33:23.505159  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.508012  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.508353  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.508393  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.508573  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.508777  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.508940  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.509105  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.509259  156795 main.go:141] libmachine: Using SSH client type: native
	I1004 01:33:23.509556  156795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1004 01:33:23.509572  156795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:33:23.822509  156795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:33:23.822536  156795 machine.go:91] provisioned docker machine in 812.607709ms
	I1004 01:33:23.822545  156795 start.go:300] post-start starting for "test-preload-377961" (driver="kvm2")
	I1004 01:33:23.822554  156795 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:33:23.822569  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:23.822913  156795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:33:23.822950  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.825573  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.826033  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.826068  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.826214  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.826401  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.826602  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.826751  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:33:23.911520  156795 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:33:23.915952  156795 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:33:23.915979  156795 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:33:23.916053  156795 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:33:23.916163  156795 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:33:23.916278  156795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:33:23.924555  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:33:23.947604  156795 start.go:303] post-start completed in 125.042342ms
	I1004 01:33:23.947634  156795 fix.go:56] fixHost completed within 19.528229876s
	I1004 01:33:23.947656  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:23.950270  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.950629  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:23.950663  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:23.950857  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:23.951058  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.951212  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:23.951347  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:23.951495  156795 main.go:141] libmachine: Using SSH client type: native
	I1004 01:33:23.951831  156795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.39.28 22 <nil> <nil>}
	I1004 01:33:23.951846  156795 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 01:33:24.058675  156795 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696383204.008733865
	
	I1004 01:33:24.058700  156795 fix.go:206] guest clock: 1696383204.008733865
	I1004 01:33:24.058711  156795 fix.go:219] Guest: 2023-10-04 01:33:24.008733865 +0000 UTC Remote: 2023-10-04 01:33:23.947638127 +0000 UTC m=+23.744192836 (delta=61.095738ms)
	I1004 01:33:24.058736  156795 fix.go:190] guest clock delta is within tolerance: 61.095738ms
	I1004 01:33:24.058742  156795 start.go:83] releasing machines lock for "test-preload-377961", held for 19.639349316s
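	Annotation: the fix.go lines above run `date +%s.%N` on the guest and compare it to local time to decide whether the guest clock needs correcting. A hedged sketch of that comparison (the 2s tolerance is an assumption for illustration, not the value minikube uses):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output (fractional part is the
	// fixed 9-digit nanosecond field) and returns guest minus local time.
	func guestClockDelta(dateOutput string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		return time.Unix(sec, nsec).Sub(local), nil
	}

	func main() {
		// Values taken from the log lines above.
		d, err := guestClockDelta("1696383204.008733865", time.Unix(1696383203, 947638127))
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		fmt.Println("delta:", d, "within tolerance:", d < tolerance && d > -tolerance)
	}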
	I1004 01:33:24.058769  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:24.059096  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetIP
	I1004 01:33:24.061831  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.062133  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:24.062167  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.062329  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:24.062854  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:24.063093  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:33:24.063196  156795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:33:24.063236  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:24.063305  156795 ssh_runner.go:195] Run: cat /version.json
	I1004 01:33:24.063324  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:33:24.065730  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.065760  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.066068  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:24.066104  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.066133  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:24.066156  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:24.066234  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:24.066430  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:24.066438  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:33:24.066629  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:33:24.066658  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:24.066856  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:33:24.066851  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:33:24.067024  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:33:24.174621  156795 ssh_runner.go:195] Run: systemctl --version
	I1004 01:33:24.180332  156795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:33:24.322240  156795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:33:24.328573  156795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:33:24.328657  156795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:33:24.343711  156795 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:33:24.343738  156795 start.go:469] detecting cgroup driver to use...
	I1004 01:33:24.343812  156795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:33:24.360217  156795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:33:24.372182  156795 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:33:24.372253  156795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:33:24.384579  156795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:33:24.397131  156795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:33:24.499763  156795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:33:24.615286  156795 docker.go:213] disabling docker service ...
	I1004 01:33:24.615372  156795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:33:24.628446  156795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:33:24.640257  156795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:33:24.755379  156795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:33:24.869555  156795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:33:24.882378  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:33:24.899230  156795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1004 01:33:24.899303  156795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:33:24.908192  156795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:33:24.908263  156795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:33:24.917202  156795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:33:24.926074  156795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:33:24.934749  156795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:33:24.945359  156795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:33:24.953132  156795 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:33:24.953195  156795 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:33:24.966231  156795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
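	Annotation: the three runs above show a fallback pattern: probing the bridge-nf-call-iptables sysctl fails because br_netfilter is not loaded, so the module is loaded explicitly and IPv4 forwarding is enabled before CRI-O is restarted. A minimal Go sketch of the same fallback (must run as root; error handling is illustrative only):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// The sysctl key only exists once br_netfilter is loaded.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
				panic(err)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			panic(err)
		}
	}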
	I1004 01:33:24.974414  156795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:33:25.075313  156795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:33:25.252093  156795 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:33:25.252160  156795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:33:25.257020  156795 start.go:537] Will wait 60s for crictl version
	I1004 01:33:25.257071  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:25.261427  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:33:25.297051  156795 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:33:25.297123  156795 ssh_runner.go:195] Run: crio --version
	I1004 01:33:25.340866  156795 ssh_runner.go:195] Run: crio --version
	I1004 01:33:25.392354  156795 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I1004 01:33:25.393741  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetIP
	I1004 01:33:25.396379  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:25.396711  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:33:25.396748  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:33:25.396972  156795 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1004 01:33:25.401039  156795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:33:25.412648  156795 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1004 01:33:25.412734  156795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:33:25.452609  156795 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1004 01:33:25.452685  156795 ssh_runner.go:195] Run: which lz4
	I1004 01:33:25.456411  156795 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 01:33:25.460200  156795 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:33:25.460227  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1004 01:33:27.364140  156795 crio.go:444] Took 1.907760 seconds to copy over tarball
	I1004 01:33:27.364208  156795 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:33:30.401544  156795 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.037309044s)
	I1004 01:33:30.401577  156795 crio.go:451] Took 3.037410 seconds to extract the tarball
	I1004 01:33:30.401588  156795 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:33:30.442529  156795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:33:30.493021  156795 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1004 01:33:30.493047  156795 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1004 01:33:30.493168  156795 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1004 01:33:30.493195  156795 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1004 01:33:30.493208  156795 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1004 01:33:30.493232  156795 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1004 01:33:30.493214  156795 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1004 01:33:30.493196  156795 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1004 01:33:30.493163  156795 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1004 01:33:30.493109  156795 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:33:30.494593  156795 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1004 01:33:30.494603  156795 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:33:30.494611  156795 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1004 01:33:30.494613  156795 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1004 01:33:30.494597  156795 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1004 01:33:30.494604  156795 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1004 01:33:30.494644  156795 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1004 01:33:30.494595  156795 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1004 01:33:30.645010  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1004 01:33:30.645583  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1004 01:33:30.646568  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1004 01:33:30.654169  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1004 01:33:30.659185  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1004 01:33:30.666955  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1004 01:33:30.669390  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1004 01:33:30.735270  156795 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1004 01:33:30.735327  156795 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1004 01:33:30.735379  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.800879  156795 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1004 01:33:30.800932  156795 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1004 01:33:30.800981  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.827213  156795 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1004 01:33:30.827265  156795 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1004 01:33:30.827316  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.843350  156795 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1004 01:33:30.843392  156795 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1004 01:33:30.843443  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.843469  156795 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1004 01:33:30.843517  156795 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1004 01:33:30.843522  156795 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1004 01:33:30.843552  156795 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1004 01:33:30.843570  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.843622  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.843622  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1004 01:33:30.843678  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1004 01:33:30.843734  156795 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1004 01:33:30.843763  156795 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1004 01:33:30.843767  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1004 01:33:30.843795  156795 ssh_runner.go:195] Run: which crictl
	I1004 01:33:30.847884  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1004 01:33:30.954877  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1004 01:33:30.954960  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1004 01:33:30.955026  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1004 01:33:30.955095  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1004 01:33:30.955156  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1004 01:33:30.955171  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1004 01:33:30.955187  156795 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1004 01:33:30.955602  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1004 01:33:30.955686  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1004 01:33:30.959175  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1004 01:33:30.959256  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1004 01:33:31.025592  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1004 01:33:31.025620  156795 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1004 01:33:31.025652  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1004 01:33:31.025673  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1004 01:33:31.025755  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1004 01:33:31.047944  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1004 01:33:31.048052  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1004 01:33:31.048075  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1004 01:33:31.048074  156795 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1004 01:33:31.048140  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1004 01:33:31.048157  156795 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1004 01:33:31.048173  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1004 01:33:31.084045  156795 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:33:34.201334  156795 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.175633672s)
	I1004 01:33:34.201371  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1004 01:33:34.201403  156795 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I1004 01:33:34.201423  156795 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: (3.175639565s)
	I1004 01:33:34.201459  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1004 01:33:34.201529  156795 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (3.153456404s)
	I1004 01:33:34.201556  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1004 01:33:34.201459  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1004 01:33:34.201599  156795 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (3.153431589s)
	I1004 01:33:34.201614  156795 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1004 01:33:34.201674  156795 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.117597567s)
	I1004 01:33:34.341914  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1004 01:33:34.341963  156795 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1004 01:33:34.342020  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1004 01:33:35.091544  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1004 01:33:35.091594  156795 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1004 01:33:35.091653  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1004 01:33:35.935736  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1004 01:33:35.935782  156795 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1004 01:33:35.935839  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1004 01:33:38.188538  156795 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.252673184s)
	I1004 01:33:38.188568  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1004 01:33:38.188599  156795 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1004 01:33:38.188649  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1004 01:33:38.631349  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1004 01:33:38.631406  156795 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1004 01:33:38.631495  156795 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1004 01:33:39.379458  156795 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17348-128338/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1004 01:33:39.379529  156795 cache_images.go:123] Successfully loaded all cached images
	I1004 01:33:39.379537  156795 cache_images.go:92] LoadImages completed in 8.886476439s
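	Annotation: the cache_images.go flow recorded above is: inspect each required image in the runtime, mark missing ones as "needs transfer", remove any stale reference, copy the cached tarball over (skipped here because the files already exist), then `podman load` them one at a time. The sketch below compresses that decision into a single local helper; it is a hedged illustration using the same podman/crictl commands seen in the log, with the ssh/scp layer omitted.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureImage loads a cached image tarball only if the image is not
	// already present in the container runtime.
	func ensureImage(image, tarball string) error {
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil // already present, nothing to transfer
		}
		_ = exec.Command("sudo", "crictl", "rmi", image).Run() // best-effort cleanup of stale tags
		if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
			return fmt.Errorf("loading %s from %s: %w", image, tarball, err)
		}
		return nil
	}

	func main() {
		// Image and path taken from the log above.
		if err := ensureImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
			panic(err)
		}
	}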
	I1004 01:33:39.379632  156795 ssh_runner.go:195] Run: crio config
	I1004 01:33:39.437040  156795 cni.go:84] Creating CNI manager for ""
	I1004 01:33:39.437090  156795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:33:39.437121  156795 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:33:39.437148  156795 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.28 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-377961 NodeName:test-preload-377961 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.28"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.28 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:33:39.437302  156795 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.28
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-377961"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.28
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.28"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
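	Annotation: the kubeadm config printed above is rendered per profile from the options in the kubeadm.go:176 line. As a rough reference, the sketch below renders just the ClusterConfiguration fields that vary, using text/template; the template text is a hypothetical simplification, not minikube's actual template.

	package main

	import (
		"os"
		"text/template"
	)

	const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(clusterCfg))
		// Values taken from the kubeadm options logged above.
		if err := t.Execute(os.Stdout, struct {
			KubernetesVersion, PodSubnet, ServiceCIDR string
		}{"v1.24.4", "10.244.0.0/16", "10.96.0.0/12"}); err != nil {
			panic(err)
		}
	}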
	I1004 01:33:39.437385  156795 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-377961 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.28
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-377961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 01:33:39.437451  156795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1004 01:33:39.446345  156795 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:33:39.446439  156795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:33:39.454815  156795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1004 01:33:39.471806  156795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:33:39.489215  156795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I1004 01:33:39.506091  156795 ssh_runner.go:195] Run: grep 192.168.39.28	control-plane.minikube.internal$ /etc/hosts
	I1004 01:33:39.509760  156795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.28	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:33:39.522314  156795 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961 for IP: 192.168.39.28
	I1004 01:33:39.522358  156795 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:33:39.522552  156795 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:33:39.522609  156795 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:33:39.522697  156795 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.key
	I1004 01:33:39.522771  156795 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/apiserver.key.aac01ba2
	I1004 01:33:39.522830  156795 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/proxy-client.key
	I1004 01:33:39.522988  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:33:39.523026  156795 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:33:39.523040  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:33:39.523067  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:33:39.523089  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:33:39.523122  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:33:39.523164  156795 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:33:39.523753  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:33:39.547582  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:33:39.571730  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:33:39.595191  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:33:39.619915  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:33:39.643170  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:33:39.666379  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:33:39.689540  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:33:39.712593  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:33:39.735243  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:33:39.757917  156795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:33:39.780491  156795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:33:39.796431  156795 ssh_runner.go:195] Run: openssl version
	I1004 01:33:39.802250  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:33:39.811447  156795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:33:39.816189  156795 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:33:39.816250  156795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:33:39.821612  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:33:39.830777  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:33:39.840257  156795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:33:39.844798  156795 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:33:39.844843  156795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:33:39.850463  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:33:39.859490  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:33:39.868571  156795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:33:39.873210  156795 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:33:39.873262  156795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:33:39.878737  156795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:33:39.887974  156795 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:33:39.892422  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:33:39.898583  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:33:39.904168  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:33:39.909634  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:33:39.915221  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:33:39.920632  156795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
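	Annotation: the six openssl runs above all use `-checkend 86400`, i.e. "does this certificate expire within 24 hours?", to decide whether the existing control-plane certs can be reused. A small Go equivalent of that check (the path in main is one of the certs from the log; the helper name is hypothetical):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within the given duration, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}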
	I1004 01:33:39.926421  156795 kubeadm.go:404] StartCluster: {Name:test-preload-377961 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-377961 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:33:39.926503  156795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:33:39.926551  156795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:33:39.962119  156795 cri.go:89] found id: ""
	I1004 01:33:39.962197  156795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:33:39.971286  156795 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:33:39.971307  156795 kubeadm.go:636] restartCluster start
	I1004 01:33:39.971367  156795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:33:39.979709  156795 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:39.980101  156795 kubeconfig.go:135] verify returned: extract IP: "test-preload-377961" does not appear in /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:33:39.980178  156795 kubeconfig.go:146] "test-preload-377961" context is missing from /home/jenkins/minikube-integration/17348-128338/kubeconfig - will repair!
	I1004 01:33:39.980433  156795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:33:39.981029  156795 kapi.go:59] client config for test-preload-377961: &rest.Config{Host:"https://192.168.39.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:33:39.981773  156795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:33:39.989880  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:39.989943  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:40.001900  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:40.001915  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:40.001955  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:40.012229  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:40.512356  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:40.512428  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:40.523730  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:41.013336  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:41.013426  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:41.024798  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:41.512323  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:41.512404  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:41.524975  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:42.012558  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:42.012675  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:42.023949  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:42.512422  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:42.512511  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:42.523955  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:43.012524  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:43.012640  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:43.024127  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:43.512629  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:43.512744  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:43.523923  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:44.012489  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:44.012584  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:44.023770  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:44.512965  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:44.513049  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:44.524294  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:45.012806  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:45.012892  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:45.024334  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:45.513235  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:45.513320  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:45.524902  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:46.012520  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:46.012653  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:46.024044  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:46.512577  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:46.512664  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:46.524361  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:47.012985  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:47.013091  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:47.024575  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:47.513222  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:47.513316  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:47.525999  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:48.012527  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:48.012655  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:48.025578  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:48.513157  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:48.513247  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:48.524631  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:49.013301  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:49.013393  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:49.025346  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:49.512349  156795 api_server.go:166] Checking apiserver status ...
	I1004 01:33:49.512478  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:33:49.524630  156795 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:33:49.990146  156795 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1004 01:33:49.990202  156795 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:33:49.990216  156795 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:33:49.990309  156795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:33:50.030690  156795 cri.go:89] found id: ""
	I1004 01:33:50.030787  156795 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:33:50.046148  156795 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:33:50.055975  156795 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:33:50.056043  156795 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:33:50.065328  156795 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:33:50.065358  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:50.164970  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:50.696858  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:51.046154  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:51.107337  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:51.189972  156795 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:33:51.190065  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:51.206347  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:51.740695  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:52.240334  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:52.740329  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:53.240239  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:53.740180  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:33:53.757238  156795 api_server.go:72] duration metric: took 2.567266671s to wait for apiserver process to appear ...
	I1004 01:33:53.757264  156795 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:33:53.757282  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:33:57.603222  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:33:57.603254  156795 api_server.go:103] status: https://192.168.39.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:33:57.603264  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:33:57.653578  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:33:57.653607  156795 api_server.go:103] status: https://192.168.39.28:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:33:58.154301  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:33:58.163755  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 01:33:58.163832  156795 api_server.go:103] status: https://192.168.39.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 01:33:58.654415  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:33:58.660230  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1004 01:33:58.660261  156795 api_server.go:103] status: https://192.168.39.28:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1004 01:33:59.153875  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:33:59.161015  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I1004 01:33:59.168252  156795 api_server.go:141] control plane version: v1.24.4
	I1004 01:33:59.168286  156795 api_server.go:131] duration metric: took 5.411014648s to wait for apiserver health ...
	I1004 01:33:59.168295  156795 cni.go:84] Creating CNI manager for ""
	I1004 01:33:59.168302  156795 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:33:59.170257  156795 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:33:59.171769  156795 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:33:59.193120  156795 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:33:59.227951  156795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:33:59.240559  156795 system_pods.go:59] 8 kube-system pods found
	I1004 01:33:59.240600  156795 system_pods.go:61] "coredns-6d4b75cb6d-c2dhr" [4e05f356-f5cf-44b8-a421-f3b714ea1a5f] Running
	I1004 01:33:59.240609  156795 system_pods.go:61] "coredns-6d4b75cb6d-xlst8" [489b4920-2bbf-4ba8-bc07-37274d8b480c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:33:59.240616  156795 system_pods.go:61] "etcd-test-preload-377961" [0d4a563d-3177-40ce-a554-beb25ed55cdd] Running
	I1004 01:33:59.240622  156795 system_pods.go:61] "kube-apiserver-test-preload-377961" [215bd539-52ae-4713-8582-46ed1b9eb7d6] Running
	I1004 01:33:59.240635  156795 system_pods.go:61] "kube-controller-manager-test-preload-377961" [944c2583-08ed-4e80-a69f-0e80397a1dff] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:33:59.240649  156795 system_pods.go:61] "kube-proxy-xcrw4" [1d1b714f-7d2a-40fe-8efa-6624a36f90be] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 01:33:59.240657  156795 system_pods.go:61] "kube-scheduler-test-preload-377961" [25c53970-824b-4158-9db8-4195bb838309] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:33:59.240667  156795 system_pods.go:61] "storage-provisioner" [279a16db-b190-4752-aab9-b4cb9b8a2bfc] Running
	I1004 01:33:59.240679  156795 system_pods.go:74] duration metric: took 12.706314ms to wait for pod list to return data ...
	I1004 01:33:59.240692  156795 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:33:59.244496  156795 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:33:59.244522  156795 node_conditions.go:123] node cpu capacity is 2
	I1004 01:33:59.244533  156795 node_conditions.go:105] duration metric: took 3.83315ms to run NodePressure ...
	I1004 01:33:59.244554  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:33:59.480312  156795 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:33:59.485930  156795 kubeadm.go:787] kubelet initialised
	I1004 01:33:59.485957  156795 kubeadm.go:788] duration metric: took 5.614176ms waiting for restarted kubelet to initialise ...
	I1004 01:33:59.485967  156795 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:33:59.492331  156795 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace to be "Ready" ...
	I1004 01:33:59.498018  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.498047  156795 pod_ready.go:81] duration metric: took 5.686279ms waiting for pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace to be "Ready" ...
	E1004 01:33:59.498059  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.498072  156795 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-xlst8" in "kube-system" namespace to be "Ready" ...
	I1004 01:33:59.504812  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "coredns-6d4b75cb6d-xlst8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.504840  156795 pod_ready.go:81] duration metric: took 6.756324ms waiting for pod "coredns-6d4b75cb6d-xlst8" in "kube-system" namespace to be "Ready" ...
	E1004 01:33:59.504852  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "coredns-6d4b75cb6d-xlst8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.504862  156795 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:33:59.510276  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "etcd-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.510307  156795 pod_ready.go:81] duration metric: took 5.429515ms waiting for pod "etcd-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	E1004 01:33:59.510319  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "etcd-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.510329  156795 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:33:59.631865  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "kube-apiserver-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.631902  156795 pod_ready.go:81] duration metric: took 121.562972ms waiting for pod "kube-apiserver-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	E1004 01:33:59.631916  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "kube-apiserver-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:33:59.631928  156795 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:00.031753  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.031787  156795 pod_ready.go:81] duration metric: took 399.844931ms waiting for pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	E1004 01:34:00.031801  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.031812  156795 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xcrw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:00.432247  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "kube-proxy-xcrw4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.432276  156795 pod_ready.go:81] duration metric: took 400.456322ms waiting for pod "kube-proxy-xcrw4" in "kube-system" namespace to be "Ready" ...
	E1004 01:34:00.432285  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "kube-proxy-xcrw4" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.432290  156795 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:00.832164  156795 pod_ready.go:97] node "test-preload-377961" hosting pod "kube-scheduler-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.832192  156795 pod_ready.go:81] duration metric: took 399.895591ms waiting for pod "kube-scheduler-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	E1004 01:34:00.832202  156795 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-377961" hosting pod "kube-scheduler-test-preload-377961" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:00.832212  156795 pod_ready.go:38] duration metric: took 1.346225481s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:34:00.832230  156795 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:34:00.844027  156795 ops.go:34] apiserver oom_adj: -16
	I1004 01:34:00.844051  156795 kubeadm.go:640] restartCluster took 20.872738119s
	I1004 01:34:00.844060  156795 kubeadm.go:406] StartCluster complete in 20.917646138s
	I1004 01:34:00.844078  156795 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:34:00.844170  156795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:34:00.844801  156795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:34:00.845031  156795 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:34:00.845199  156795 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:34:00.845264  156795 config.go:182] Loaded profile config "test-preload-377961": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1004 01:34:00.845288  156795 addons.go:69] Setting storage-provisioner=true in profile "test-preload-377961"
	I1004 01:34:00.845321  156795 addons.go:231] Setting addon storage-provisioner=true in "test-preload-377961"
	I1004 01:34:00.845331  156795 addons.go:69] Setting default-storageclass=true in profile "test-preload-377961"
	W1004 01:34:00.845336  156795 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:34:00.845352  156795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-377961"
	I1004 01:34:00.845393  156795 host.go:66] Checking if "test-preload-377961" exists ...
	I1004 01:34:00.845732  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:34:00.845775  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:34:00.845797  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:34:00.845858  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:34:00.845803  156795 kapi.go:59] client config for test-preload-377961: &rest.Config{Host:"https://192.168.39.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:34:00.849288  156795 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-377961" context rescaled to 1 replicas
	I1004 01:34:00.849325  156795 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.28 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:34:00.851541  156795 out.go:177] * Verifying Kubernetes components...
	I1004 01:34:00.853199  156795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:34:00.866041  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I1004 01:34:00.866182  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35049
	I1004 01:34:00.866540  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:34:00.866662  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:34:00.867107  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:34:00.867135  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:34:00.867349  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:34:00.867371  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:34:00.867464  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:34:00.867645  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetState
	I1004 01:34:00.867701  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:34:00.868202  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:34:00.868240  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:34:00.870280  156795 kapi.go:59] client config for test-preload-377961: &rest.Config{Host:"https://192.168.39.28:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.crt", KeyFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/profiles/test-preload-377961/client.key", CAFile:"/home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf83c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1004 01:34:00.870636  156795 addons.go:231] Setting addon default-storageclass=true in "test-preload-377961"
	W1004 01:34:00.870656  156795 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:34:00.870691  156795 host.go:66] Checking if "test-preload-377961" exists ...
	I1004 01:34:00.871138  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:34:00.871188  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:34:00.885308  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I1004 01:34:00.885653  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I1004 01:34:00.885832  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:34:00.885997  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:34:00.886343  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:34:00.886365  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:34:00.886592  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:34:00.886619  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:34:00.886793  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:34:00.887000  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:34:00.887177  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetState
	I1004 01:34:00.887398  156795 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:34:00.887445  156795 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:34:00.889007  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:34:00.891323  156795 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:34:00.892946  156795 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:34:00.892971  156795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:34:00.892991  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:34:00.896052  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:34:00.896444  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:34:00.896487  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:34:00.896725  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:34:00.896945  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:34:00.897127  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:34:00.897305  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:34:00.904683  156795 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40425
	I1004 01:34:00.905187  156795 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:34:00.905692  156795 main.go:141] libmachine: Using API Version  1
	I1004 01:34:00.905725  156795 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:34:00.906155  156795 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:34:00.906382  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetState
	I1004 01:34:00.908123  156795 main.go:141] libmachine: (test-preload-377961) Calling .DriverName
	I1004 01:34:00.908412  156795 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:34:00.908431  156795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:34:00.908450  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHHostname
	I1004 01:34:00.911781  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:34:00.912443  156795 main.go:141] libmachine: (test-preload-377961) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:df:c9:e5", ip: ""} in network mk-test-preload-377961: {Iface:virbr1 ExpiryTime:2023-10-04 02:33:16 +0000 UTC Type:0 Mac:52:54:00:df:c9:e5 Iaid: IPaddr:192.168.39.28 Prefix:24 Hostname:test-preload-377961 Clientid:01:52:54:00:df:c9:e5}
	I1004 01:34:00.912480  156795 main.go:141] libmachine: (test-preload-377961) DBG | domain test-preload-377961 has defined IP address 192.168.39.28 and MAC address 52:54:00:df:c9:e5 in network mk-test-preload-377961
	I1004 01:34:00.912549  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHPort
	I1004 01:34:00.912750  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHKeyPath
	I1004 01:34:00.912957  156795 main.go:141] libmachine: (test-preload-377961) Calling .GetSSHUsername
	I1004 01:34:00.913217  156795 sshutil.go:53] new ssh client: &{IP:192.168.39.28 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/test-preload-377961/id_rsa Username:docker}
	I1004 01:34:01.012206  156795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:34:01.063445  156795 start.go:896] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1004 01:34:01.063539  156795 node_ready.go:35] waiting up to 6m0s for node "test-preload-377961" to be "Ready" ...
	I1004 01:34:01.079784  156795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:34:01.993612  156795 main.go:141] libmachine: Making call to close driver server
	I1004 01:34:01.993636  156795 main.go:141] libmachine: (test-preload-377961) Calling .Close
	I1004 01:34:01.993954  156795 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:34:01.993985  156795 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:34:01.993999  156795 main.go:141] libmachine: Making call to close driver server
	I1004 01:34:01.994010  156795 main.go:141] libmachine: (test-preload-377961) Calling .Close
	I1004 01:34:01.994264  156795 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:34:01.994277  156795 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:34:01.998359  156795 main.go:141] libmachine: Making call to close driver server
	I1004 01:34:01.998385  156795 main.go:141] libmachine: (test-preload-377961) Calling .Close
	I1004 01:34:01.998636  156795 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:34:01.998655  156795 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:34:01.998663  156795 main.go:141] libmachine: Making call to close driver server
	I1004 01:34:01.998669  156795 main.go:141] libmachine: (test-preload-377961) DBG | Closing plugin on server side
	I1004 01:34:01.998672  156795 main.go:141] libmachine: (test-preload-377961) Calling .Close
	I1004 01:34:01.998898  156795 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:34:01.998922  156795 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:34:01.998930  156795 main.go:141] libmachine: (test-preload-377961) DBG | Closing plugin on server side
	I1004 01:34:02.008876  156795 main.go:141] libmachine: Making call to close driver server
	I1004 01:34:02.008902  156795 main.go:141] libmachine: (test-preload-377961) Calling .Close
	I1004 01:34:02.009162  156795 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:34:02.009199  156795 main.go:141] libmachine: (test-preload-377961) DBG | Closing plugin on server side
	I1004 01:34:02.009214  156795 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:34:02.011372  156795 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1004 01:34:02.013000  156795 addons.go:502] enable addons completed in 1.16780298s: enabled=[storage-provisioner default-storageclass]
	I1004 01:34:03.236146  156795 node_ready.go:58] node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:05.236951  156795 node_ready.go:58] node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:07.735491  156795 node_ready.go:58] node "test-preload-377961" has status "Ready":"False"
	I1004 01:34:08.237371  156795 node_ready.go:49] node "test-preload-377961" has status "Ready":"True"
	I1004 01:34:08.237397  156795 node_ready.go:38] duration metric: took 7.173828756s waiting for node "test-preload-377961" to be "Ready" ...
	I1004 01:34:08.237407  156795 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:34:08.244581  156795 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.251134  156795 pod_ready.go:92] pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:08.251166  156795 pod_ready.go:81] duration metric: took 6.558609ms waiting for pod "coredns-6d4b75cb6d-c2dhr" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.251184  156795 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.769440  156795 pod_ready.go:92] pod "etcd-test-preload-377961" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:08.769470  156795 pod_ready.go:81] duration metric: took 518.277881ms waiting for pod "etcd-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.769482  156795 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.773926  156795 pod_ready.go:92] pod "kube-apiserver-test-preload-377961" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:08.773945  156795 pod_ready.go:81] duration metric: took 4.455757ms waiting for pod "kube-apiserver-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:08.773956  156795 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:10.942432  156795 pod_ready.go:102] pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace has status "Ready":"False"
	I1004 01:34:12.446161  156795 pod_ready.go:92] pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:12.446182  156795 pod_ready.go:81] duration metric: took 3.672218996s waiting for pod "kube-controller-manager-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:12.446191  156795 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xcrw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:12.451180  156795 pod_ready.go:92] pod "kube-proxy-xcrw4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:12.451201  156795 pod_ready.go:81] duration metric: took 5.003021ms waiting for pod "kube-proxy-xcrw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:12.451210  156795 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:12.637345  156795 pod_ready.go:92] pod "kube-scheduler-test-preload-377961" in "kube-system" namespace has status "Ready":"True"
	I1004 01:34:12.637378  156795 pod_ready.go:81] duration metric: took 186.158918ms waiting for pod "kube-scheduler-test-preload-377961" in "kube-system" namespace to be "Ready" ...
	I1004 01:34:12.637393  156795 pod_ready.go:38] duration metric: took 4.399976635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:34:12.637416  156795 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:34:12.637474  156795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:34:12.650984  156795 api_server.go:72] duration metric: took 11.801617709s to wait for apiserver process to appear ...
	I1004 01:34:12.651014  156795 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:34:12.651033  156795 api_server.go:253] Checking apiserver healthz at https://192.168.39.28:8443/healthz ...
	I1004 01:34:12.657864  156795 api_server.go:279] https://192.168.39.28:8443/healthz returned 200:
	ok
	I1004 01:34:12.659263  156795 api_server.go:141] control plane version: v1.24.4
	I1004 01:34:12.659281  156795 api_server.go:131] duration metric: took 8.259895ms to wait for apiserver health ...
	I1004 01:34:12.659289  156795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:34:12.839723  156795 system_pods.go:59] 7 kube-system pods found
	I1004 01:34:12.839749  156795 system_pods.go:61] "coredns-6d4b75cb6d-c2dhr" [4e05f356-f5cf-44b8-a421-f3b714ea1a5f] Running
	I1004 01:34:12.839754  156795 system_pods.go:61] "etcd-test-preload-377961" [0d4a563d-3177-40ce-a554-beb25ed55cdd] Running
	I1004 01:34:12.839758  156795 system_pods.go:61] "kube-apiserver-test-preload-377961" [215bd539-52ae-4713-8582-46ed1b9eb7d6] Running
	I1004 01:34:12.839762  156795 system_pods.go:61] "kube-controller-manager-test-preload-377961" [944c2583-08ed-4e80-a69f-0e80397a1dff] Running
	I1004 01:34:12.839766  156795 system_pods.go:61] "kube-proxy-xcrw4" [1d1b714f-7d2a-40fe-8efa-6624a36f90be] Running
	I1004 01:34:12.839770  156795 system_pods.go:61] "kube-scheduler-test-preload-377961" [25c53970-824b-4158-9db8-4195bb838309] Running
	I1004 01:34:12.839773  156795 system_pods.go:61] "storage-provisioner" [279a16db-b190-4752-aab9-b4cb9b8a2bfc] Running
	I1004 01:34:12.839779  156795 system_pods.go:74] duration metric: took 180.484139ms to wait for pod list to return data ...
	I1004 01:34:12.839787  156795 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:34:13.038575  156795 default_sa.go:45] found service account: "default"
	I1004 01:34:13.038601  156795 default_sa.go:55] duration metric: took 198.807703ms for default service account to be created ...
	I1004 01:34:13.038610  156795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:34:13.246862  156795 system_pods.go:86] 7 kube-system pods found
	I1004 01:34:13.246890  156795 system_pods.go:89] "coredns-6d4b75cb6d-c2dhr" [4e05f356-f5cf-44b8-a421-f3b714ea1a5f] Running
	I1004 01:34:13.246895  156795 system_pods.go:89] "etcd-test-preload-377961" [0d4a563d-3177-40ce-a554-beb25ed55cdd] Running
	I1004 01:34:13.246900  156795 system_pods.go:89] "kube-apiserver-test-preload-377961" [215bd539-52ae-4713-8582-46ed1b9eb7d6] Running
	I1004 01:34:13.246904  156795 system_pods.go:89] "kube-controller-manager-test-preload-377961" [944c2583-08ed-4e80-a69f-0e80397a1dff] Running
	I1004 01:34:13.246908  156795 system_pods.go:89] "kube-proxy-xcrw4" [1d1b714f-7d2a-40fe-8efa-6624a36f90be] Running
	I1004 01:34:13.246912  156795 system_pods.go:89] "kube-scheduler-test-preload-377961" [25c53970-824b-4158-9db8-4195bb838309] Running
	I1004 01:34:13.246918  156795 system_pods.go:89] "storage-provisioner" [279a16db-b190-4752-aab9-b4cb9b8a2bfc] Running
	I1004 01:34:13.246925  156795 system_pods.go:126] duration metric: took 208.31013ms to wait for k8s-apps to be running ...
	I1004 01:34:13.246933  156795 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:34:13.246992  156795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:34:13.259709  156795 system_svc.go:56] duration metric: took 12.764752ms WaitForService to wait for kubelet.
	I1004 01:34:13.259735  156795 kubeadm.go:581] duration metric: took 12.410380831s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:34:13.259754  156795 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:34:13.436546  156795 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:34:13.436574  156795 node_conditions.go:123] node cpu capacity is 2
	I1004 01:34:13.436586  156795 node_conditions.go:105] duration metric: took 176.823994ms to run NodePressure ...
	I1004 01:34:13.436598  156795 start.go:228] waiting for startup goroutines ...
	I1004 01:34:13.436604  156795 start.go:233] waiting for cluster config update ...
	I1004 01:34:13.436612  156795 start.go:242] writing updated cluster config ...
	I1004 01:34:13.436857  156795 ssh_runner.go:195] Run: rm -f paused
	I1004 01:34:13.484002  156795 start.go:600] kubectl: 1.28.2, cluster: 1.24.4 (minor skew: 4)
	I1004 01:34:13.486034  156795 out.go:177] 
	W1004 01:34:13.487384  156795 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.24.4.
	I1004 01:34:13.488573  156795 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1004 01:34:13.489837  156795 out.go:177] * Done! kubectl is now configured to use "test-preload-377961" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:33:16 UTC, ends at Wed 2023-10-04 01:34:14 UTC. --
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.388842237Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0e6691e8404424aca449e4d901ba231a7a3e1ddde2d57a0a449fd71a354ca777,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-c2dhr,Uid:4e05f356-f5cf-44b8-a421-f3b714ea1a5f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383246205064261,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2dhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e05f356-f5cf-44b8-a421-f3b714ea1a5f,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:33:58.150139145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:279a16db-b190-4752-aab9-b4cb9b8a2bfc,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383239100471343,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 279a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-04T01:33:58.150138010Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2a59852bb82946883ef5c270fd15da26228bcd60e46730c0317371d074409215,Metadata:&PodSandboxMetadata{Name:kube-proxy-xcrw4,Uid:1d1b714f-7d2a-40fe-8efa-6624a36f90be,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383239096611198,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-xcrw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d1b714f-7d2a-40fe-8efa-6624a36f90be,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:33:58.150119892Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1e5fcecc5e8f04301e44a41e74261614710f736959d0e30fb42514a49a7ea62,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-377961,Ui
d:a54ff3fd272d1a6eabda674ef5ff9e9d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383231777788082,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ff3fd272d1a6eabda674ef5ff9e9d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a54ff3fd272d1a6eabda674ef5ff9e9d,kubernetes.io/config.seen: 2023-10-04T01:33:51.162519817Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:32e75854a1c4d6533faeac50c1cfe1df0af399d176802be761d4c672a399ea85,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-377961,Uid:df5aedda67fc9867e6b562838baf18ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383231772516688,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-377961,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: df5aedda67fc9867e6b562838baf18ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.28:8443,kubernetes.io/config.hash: df5aedda67fc9867e6b562838baf18ec,kubernetes.io/config.seen: 2023-10-04T01:33:51.162518381Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c4865eb99f6b72a5572a71ba285d2660b505228fa147734519e21c7d7bf92f55,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-377961,Uid:cba489baae1a42be75198173b88dcf3f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383231743868534,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba489baae1a42be75198173b88dcf3f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.28:2379,kubernetes.io/con
fig.hash: cba489baae1a42be75198173b88dcf3f,kubernetes.io/config.seen: 2023-10-04T01:33:51.162504092Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f54a77bd1b173d103f2df1d4845da0005eb5b8b6d7d9a4550fcca536d9744201,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-377961,Uid:4163b286f092263e0af18605f6f01c0f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696383231740415577,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4163b286f092263e0af18605f6f01c0f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4163b286f092263e0af18605f6f01c0f,kubernetes.io/config.seen: 2023-10-04T01:33:51.162520811Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=6f8031f0-086d-4df4-8dd1-30f3a172d53f name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.389656855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e0fdd56-ccd7-47bf-a4ec-8abba39fbc64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.389741027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e0fdd56-ccd7-47bf-a4ec-8abba39fbc64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.389936080Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:afdd6b9727b5c00b5ab529d76700cced77e2b885e47e99b98d5a24c0cfd41067,PodSandboxId:0e6691e8404424aca449e4d901ba231a7a3e1ddde2d57a0a449fd71a354ca777,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1696383246808615740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2dhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e05f356-f5cf-44b8-a421-f3b714ea1a5f,},Annotations:map[string]string{io.kubernetes.container.hash: fafb780f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c781b7006f139c6d3fceff14d78613c48fbdbdd5c240fef4d1d91b70888b21,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696383240356369396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 279a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c04e4aaf86ee12aa135eb41990dd8acb61f1d6f450157e9faaa3f167fd32d0d,PodSandboxId:2a59852bb82946883ef5c270fd15da26228bcd60e46730c0317371d074409215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1696383239795912838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcrw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d1b714f-7d2a-40fe-8efa-6624a36f90be,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7222c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696383239824621676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
9a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804dfc39aa0f4e2c5f870ee2d56c98e6e695e712646f5322db486c63a44d0c5a,PodSandboxId:c4865eb99f6b72a5572a71ba285d2660b505228fa147734519e21c7d7bf92f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1696383232695709073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba489baae1a42be75198173b88dcf3f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: ba251036,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53780ba803ee757fe4386d9e2c1531ad25b5d262580a8f6df77ce1f4780596b1,PodSandboxId:f54a77bd1b173d103f2df1d4845da0005eb5b8b6d7d9a4550fcca536d9744201,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1696383232738217658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4163b286f092263e0af18605f6f01c0f,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4ae1d7feccc8307819fc3d139275e4770671fdd06f9d3ff0e0040422a6eabc,PodSandboxId:f1e5fcecc5e8f04301e44a41e74261614710f736959d0e30fb42514a49a7ea62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1696383232610053596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ff3fd272d1a6eabda674ef5ff9e9d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52223653f9eca6c2026dcbe87b1126474beca760ae5f005811e89b85aa3f3ae1,PodSandboxId:32e75854a1c4d6533faeac50c1cfe1df0af399d176802be761d4c672a399ea85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1696383232327012199,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df5aedda67fc9867e6b562838baf18ec,},Annotations:map[string]
string{io.kubernetes.container.hash: 27e021f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e0fdd56-ccd7-47bf-a4ec-8abba39fbc64 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.410960712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eeea3ed9-6bb2-46f7-a757-38661f6ac4a8 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.411057074Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eeea3ed9-6bb2-46f7-a757-38661f6ac4a8 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.412376765Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2ae62705-51d7-47d1-807c-aa5aba5eaf07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.412803975Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383254412790913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=2ae62705-51d7-47d1-807c-aa5aba5eaf07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.413432227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ce816bde-2dc3-4458-b25b-0a318a21ab1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.413509045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ce816bde-2dc3-4458-b25b-0a318a21ab1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.413712570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:afdd6b9727b5c00b5ab529d76700cced77e2b885e47e99b98d5a24c0cfd41067,PodSandboxId:0e6691e8404424aca449e4d901ba231a7a3e1ddde2d57a0a449fd71a354ca777,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1696383246808615740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2dhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e05f356-f5cf-44b8-a421-f3b714ea1a5f,},Annotations:map[string]string{io.kubernetes.container.hash: fafb780f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c781b7006f139c6d3fceff14d78613c48fbdbdd5c240fef4d1d91b70888b21,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696383240356369396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 279a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c04e4aaf86ee12aa135eb41990dd8acb61f1d6f450157e9faaa3f167fd32d0d,PodSandboxId:2a59852bb82946883ef5c270fd15da26228bcd60e46730c0317371d074409215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1696383239795912838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcrw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d1b714f-7d2a-40fe-8efa-6624a36f90be,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7222c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696383239824621676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
9a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804dfc39aa0f4e2c5f870ee2d56c98e6e695e712646f5322db486c63a44d0c5a,PodSandboxId:c4865eb99f6b72a5572a71ba285d2660b505228fa147734519e21c7d7bf92f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1696383232695709073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba489baae1a42be75198173b88dcf3f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: ba251036,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53780ba803ee757fe4386d9e2c1531ad25b5d262580a8f6df77ce1f4780596b1,PodSandboxId:f54a77bd1b173d103f2df1d4845da0005eb5b8b6d7d9a4550fcca536d9744201,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1696383232738217658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4163b286f092263e0af18605f6f01c0f,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4ae1d7feccc8307819fc3d139275e4770671fdd06f9d3ff0e0040422a6eabc,PodSandboxId:f1e5fcecc5e8f04301e44a41e74261614710f736959d0e30fb42514a49a7ea62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1696383232610053596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ff3fd272d1a6eabda674ef5ff9e9d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52223653f9eca6c2026dcbe87b1126474beca760ae5f005811e89b85aa3f3ae1,PodSandboxId:32e75854a1c4d6533faeac50c1cfe1df0af399d176802be761d4c672a399ea85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1696383232327012199,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df5aedda67fc9867e6b562838baf18ec,},Annotations:map[string]
string{io.kubernetes.container.hash: 27e021f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ce816bde-2dc3-4458-b25b-0a318a21ab1e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.452377662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6b63e7f6-8da4-4ad8-9c0e-9bd127a19011 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.452460942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6b63e7f6-8da4-4ad8-9c0e-9bd127a19011 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.453424764Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=28c92d68-18de-4132-a643-97df7a3981ff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.453909888Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383254453893619,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=28c92d68-18de-4132-a643-97df7a3981ff name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.454514238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ccd6eeb0-c15a-4892-9078-9d518fe67e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.454559884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ccd6eeb0-c15a-4892-9078-9d518fe67e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.455528734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:afdd6b9727b5c00b5ab529d76700cced77e2b885e47e99b98d5a24c0cfd41067,PodSandboxId:0e6691e8404424aca449e4d901ba231a7a3e1ddde2d57a0a449fd71a354ca777,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1696383246808615740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2dhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e05f356-f5cf-44b8-a421-f3b714ea1a5f,},Annotations:map[string]string{io.kubernetes.container.hash: fafb780f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c781b7006f139c6d3fceff14d78613c48fbdbdd5c240fef4d1d91b70888b21,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696383240356369396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 279a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c04e4aaf86ee12aa135eb41990dd8acb61f1d6f450157e9faaa3f167fd32d0d,PodSandboxId:2a59852bb82946883ef5c270fd15da26228bcd60e46730c0317371d074409215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1696383239795912838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcrw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d1b714f-7d2a-40fe-8efa-6624a36f90be,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7222c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696383239824621676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
9a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804dfc39aa0f4e2c5f870ee2d56c98e6e695e712646f5322db486c63a44d0c5a,PodSandboxId:c4865eb99f6b72a5572a71ba285d2660b505228fa147734519e21c7d7bf92f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1696383232695709073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba489baae1a42be75198173b88dcf3f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: ba251036,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53780ba803ee757fe4386d9e2c1531ad25b5d262580a8f6df77ce1f4780596b1,PodSandboxId:f54a77bd1b173d103f2df1d4845da0005eb5b8b6d7d9a4550fcca536d9744201,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1696383232738217658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4163b286f092263e0af18605f6f01c0f,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4ae1d7feccc8307819fc3d139275e4770671fdd06f9d3ff0e0040422a6eabc,PodSandboxId:f1e5fcecc5e8f04301e44a41e74261614710f736959d0e30fb42514a49a7ea62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1696383232610053596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ff3fd272d1a6eabda674ef5ff9e9d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52223653f9eca6c2026dcbe87b1126474beca760ae5f005811e89b85aa3f3ae1,PodSandboxId:32e75854a1c4d6533faeac50c1cfe1df0af399d176802be761d4c672a399ea85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1696383232327012199,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df5aedda67fc9867e6b562838baf18ec,},Annotations:map[string]
string{io.kubernetes.container.hash: 27e021f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ccd6eeb0-c15a-4892-9078-9d518fe67e96 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.493406404Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=87127ecb-3e4b-44d1-98e2-6eada11fb203 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.493490411Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=87127ecb-3e4b-44d1-98e2-6eada11fb203 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.495369015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cda9b1f6-d27e-4e75-9267-4dff5b902f33 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.495809855Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383254495796731,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:77,},},},}" file="go-grpc-middleware/chain.go:25" id=cda9b1f6-d27e-4e75-9267-4dff5b902f33 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.496490153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fcf746a2-7be0-43e2-961e-16c30662095f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.496563451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fcf746a2-7be0-43e2-961e-16c30662095f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:34:14 test-preload-377961 crio[713]: time="2023-10-04 01:34:14.496770016Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:afdd6b9727b5c00b5ab529d76700cced77e2b885e47e99b98d5a24c0cfd41067,PodSandboxId:0e6691e8404424aca449e4d901ba231a7a3e1ddde2d57a0a449fd71a354ca777,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1696383246808615740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-c2dhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4e05f356-f5cf-44b8-a421-f3b714ea1a5f,},Annotations:map[string]string{io.kubernetes.container.hash: fafb780f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83c781b7006f139c6d3fceff14d78613c48fbdbdd5c240fef4d1d91b70888b21,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696383240356369396,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: 279a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c04e4aaf86ee12aa135eb41990dd8acb61f1d6f450157e9faaa3f167fd32d0d,PodSandboxId:2a59852bb82946883ef5c270fd15da26228bcd60e46730c0317371d074409215,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1696383239795912838,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xcrw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
1d1b714f-7d2a-40fe-8efa-6624a36f90be,},Annotations:map[string]string{io.kubernetes.container.hash: 9a7222c4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819,PodSandboxId:be05d5470cb83069e49b2de2625e4ef06c2584b6202da8265507986e97dadff2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1696383239824621676,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27
9a16db-b190-4752-aab9-b4cb9b8a2bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 98ab6d8c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:804dfc39aa0f4e2c5f870ee2d56c98e6e695e712646f5322db486c63a44d0c5a,PodSandboxId:c4865eb99f6b72a5572a71ba285d2660b505228fa147734519e21c7d7bf92f55,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1696383232695709073,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cba489baae1a42be75198173b88dcf3f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: ba251036,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53780ba803ee757fe4386d9e2c1531ad25b5d262580a8f6df77ce1f4780596b1,PodSandboxId:f54a77bd1b173d103f2df1d4845da0005eb5b8b6d7d9a4550fcca536d9744201,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1696383232738217658,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4163b286f092263e0af18605f6f01c0f,},Annotations:map[string]string{i
o.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc4ae1d7feccc8307819fc3d139275e4770671fdd06f9d3ff0e0040422a6eabc,PodSandboxId:f1e5fcecc5e8f04301e44a41e74261614710f736959d0e30fb42514a49a7ea62,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1696383232610053596,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54ff3fd272d1a6eabda674ef5ff9e9d,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52223653f9eca6c2026dcbe87b1126474beca760ae5f005811e89b85aa3f3ae1,PodSandboxId:32e75854a1c4d6533faeac50c1cfe1df0af399d176802be761d4c672a399ea85,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1696383232327012199,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-377961,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df5aedda67fc9867e6b562838baf18ec,},Annotations:map[string]
string{io.kubernetes.container.hash: 27e021f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fcf746a2-7be0-43e2-961e-16c30662095f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	afdd6b9727b5c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   0e6691e840442       coredns-6d4b75cb6d-c2dhr
	83c781b7006f1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       3                   be05d5470cb83       storage-provisioner
	b6a439158e66c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   be05d5470cb83       storage-provisioner
	5c04e4aaf86ee       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   2a59852bb8294       kube-proxy-xcrw4
	53780ba803ee7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   f54a77bd1b173       kube-scheduler-test-preload-377961
	804dfc39aa0f4       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   c4865eb99f6b7       etcd-test-preload-377961
	bc4ae1d7feccc       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   f1e5fcecc5e8f       kube-controller-manager-test-preload-377961
	52223653f9eca       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   22 seconds ago      Running             kube-apiserver            1                   32e75854a1c4d       kube-apiserver-test-preload-377961
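The container-status table above is the CRI-O-side view of the restarted control plane. Assuming the same profile name, a comparable listing can usually be re-captured over SSH with crictl, and the exited storage-provisioner attempt inspected by its ID prefix:

  minikube -p test-preload-377961 ssh -- sudo crictl ps -a
  minikube -p test-preload-377961 ssh -- sudo crictl logs b6a439158e66c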
	
	* 
	* ==> coredns [afdd6b9727b5c00b5ab529d76700cced77e2b885e47e99b98d5a24c0cfd41067] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:49782 - 12326 "HINFO IN 9057396841830173069.6736997122918478755. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.056628121s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-377961
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-377961
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=test-preload-377961
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_32_34_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:32:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-377961
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:34:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:34:07 +0000   Wed, 04 Oct 2023 01:32:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:34:07 +0000   Wed, 04 Oct 2023 01:32:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:34:07 +0000   Wed, 04 Oct 2023 01:32:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:34:07 +0000   Wed, 04 Oct 2023 01:34:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.28
	  Hostname:    test-preload-377961
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 f8f61899f6d24f8a980e972e98d5d65f
	  System UUID:                f8f61899-f6d2-4f8a-980e-972e98d5d65f
	  Boot ID:                    6fab99fc-b9ec-46d0-80e3-ab611a15c882
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-c2dhr                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-377961                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-377961             250m (12%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-test-preload-377961    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-xcrw4                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-test-preload-377961             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  110s (x5 over 110s)  kubelet          Node test-preload-377961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s (x4 over 110s)  kubelet          Node test-preload-377961 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    110s (x4 over 110s)  kubelet          Node test-preload-377961 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node test-preload-377961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node test-preload-377961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node test-preload-377961 status is now: NodeHasSufficientPID
	  Normal  NodeReady                90s                  kubelet          Node test-preload-377961 status is now: NodeReady
	  Normal  RegisteredNode           88s                  node-controller  Node test-preload-377961 event: Registered Node test-preload-377961 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-377961 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-377961 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-377961 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-377961 event: Registered Node test-preload-377961 in Controller
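The node description above is the kubectl describe view at the point the log bundle was collected: the node is Ready, all seven kube-system pods are scheduled on it, and the kubelet restarted 23s before capture. To re-query it directly (the context name is assumed to match the cluster configured earlier in the log):

  kubectl --context test-preload-377961 describe node test-preload-377961
  kubectl --context test-preload-377961 get pods -n kube-system -o wide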
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071020] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.368619] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.474943] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150194] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.466107] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.074328] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.111718] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.146377] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.113739] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.210541] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +25.953916] systemd-fstab-generator[1096]: Ignoring "noauto" for root device
	[  +9.305725] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 4 01:34] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [804dfc39aa0f4e2c5f870ee2d56c98e6e695e712646f5322db486c63a44d0c5a] <==
	* {"level":"info","ts":"2023-10-04T01:33:54.336Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"2fa11d851b98b853","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-10-04T01:33:54.337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 switched to configuration voters=(3432056848563877971)"}
	{"level":"info","ts":"2023-10-04T01:33:54.337Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","added-peer-id":"2fa11d851b98b853","added-peer-peer-urls":["https://192.168.39.28:2380"]}
	{"level":"info","ts":"2023-10-04T01:33:54.337Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8fc02aca6c76ee1e","local-member-id":"2fa11d851b98b853","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:33:54.337Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:33:54.339Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"2fa11d851b98b853","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-10-04T01:33:54.345Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-04T01:33:54.345Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2023-10-04T01:33:54.345Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.28:2380"}
	{"level":"info","ts":"2023-10-04T01:33:54.346Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:33:54.346Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"2fa11d851b98b853","initial-advertise-peer-urls":["https://192.168.39.28:2380"],"listen-peer-urls":["https://192.168.39.28:2380"],"advertise-client-urls":["https://192.168.39.28:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.28:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgPreVoteResp from 2fa11d851b98b853 at term 2"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 received MsgVoteResp from 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2fa11d851b98b853 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:33:54.976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2fa11d851b98b853 elected leader 2fa11d851b98b853 at term 3"}
	{"level":"info","ts":"2023-10-04T01:33:54.978Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"2fa11d851b98b853","local-member-attributes":"{Name:test-preload-377961 ClientURLs:[https://192.168.39.28:2379]}","request-path":"/0/members/2fa11d851b98b853/attributes","cluster-id":"8fc02aca6c76ee1e","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:33:54.978Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:33:54.980Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:33:54.983Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:33:54.986Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.28:2379"}
	{"level":"info","ts":"2023-10-04T01:33:54.990Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:33:54.990Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  01:34:14 up 1 min,  0 users,  load average: 0.73, 0.25, 0.09
	Linux test-preload-377961 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [52223653f9eca6c2026dcbe87b1126474beca760ae5f005811e89b85aa3f3ae1] <==
	* I1004 01:33:57.610453       1 controller.go:85] Starting OpenAPI V3 controller
	I1004 01:33:57.610568       1 naming_controller.go:291] Starting NamingConditionController
	I1004 01:33:57.611900       1 establishing_controller.go:76] Starting EstablishingController
	I1004 01:33:57.612042       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1004 01:33:57.612237       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1004 01:33:57.612276       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1004 01:33:57.658620       1 cache.go:39] Caches are synced for autoregister controller
	I1004 01:33:57.661042       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E1004 01:33:57.671632       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1004 01:33:57.707619       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1004 01:33:57.733000       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 01:33:57.733340       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1004 01:33:57.733722       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 01:33:57.733891       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1004 01:33:57.741542       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1004 01:33:58.216909       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1004 01:33:58.570535       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1004 01:33:59.379112       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1004 01:33:59.390438       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1004 01:33:59.429110       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1004 01:33:59.447805       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1004 01:33:59.456952       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1004 01:34:00.254969       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1004 01:34:10.253051       1 controller.go:611] quota admission added evaluator for: endpoints
	I1004 01:34:10.352634       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [bc4ae1d7feccc8307819fc3d139275e4770671fdd06f9d3ff0e0040422a6eabc] <==
	* I1004 01:34:10.322536       1 shared_informer.go:262] Caches are synced for node
	I1004 01:34:10.322618       1 range_allocator.go:173] Starting range CIDR allocator
	I1004 01:34:10.322647       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1004 01:34:10.322674       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1004 01:34:10.325904       1 shared_informer.go:262] Caches are synced for attach detach
	I1004 01:34:10.339782       1 shared_informer.go:262] Caches are synced for GC
	I1004 01:34:10.343453       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1004 01:34:10.346237       1 shared_informer.go:262] Caches are synced for persistent volume
	I1004 01:34:10.350245       1 shared_informer.go:262] Caches are synced for TTL
	I1004 01:34:10.353796       1 shared_informer.go:262] Caches are synced for taint
	I1004 01:34:10.353874       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1004 01:34:10.354006       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1004 01:34:10.354070       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-377961. Assuming now as a timestamp.
	I1004 01:34:10.354110       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1004 01:34:10.354254       1 event.go:294] "Event occurred" object="test-preload-377961" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-377961 event: Registered Node test-preload-377961 in Controller"
	I1004 01:34:10.356102       1 shared_informer.go:262] Caches are synced for daemon sets
	I1004 01:34:10.426950       1 shared_informer.go:262] Caches are synced for disruption
	I1004 01:34:10.427061       1 disruption.go:371] Sending events to api server.
	I1004 01:34:10.436347       1 shared_informer.go:262] Caches are synced for deployment
	I1004 01:34:10.445274       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1004 01:34:10.463228       1 shared_informer.go:262] Caches are synced for resource quota
	I1004 01:34:10.494872       1 shared_informer.go:262] Caches are synced for resource quota
	I1004 01:34:10.904812       1 shared_informer.go:262] Caches are synced for garbage collector
	I1004 01:34:10.950991       1 shared_informer.go:262] Caches are synced for garbage collector
	I1004 01:34:10.951037       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [5c04e4aaf86ee12aa135eb41990dd8acb61f1d6f450157e9faaa3f167fd32d0d] <==
	* I1004 01:34:00.202702       1 node.go:163] Successfully retrieved node IP: 192.168.39.28
	I1004 01:34:00.202787       1 server_others.go:138] "Detected node IP" address="192.168.39.28"
	I1004 01:34:00.202815       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1004 01:34:00.246665       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1004 01:34:00.246706       1 server_others.go:206] "Using iptables Proxier"
	I1004 01:34:00.247486       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1004 01:34:00.247891       1 server.go:661] "Version info" version="v1.24.4"
	I1004 01:34:00.247925       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:34:00.248998       1 config.go:317] "Starting service config controller"
	I1004 01:34:00.249333       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1004 01:34:00.249408       1 config.go:226] "Starting endpoint slice config controller"
	I1004 01:34:00.249417       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1004 01:34:00.251257       1 config.go:444] "Starting node config controller"
	I1004 01:34:00.251382       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1004 01:34:00.349818       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1004 01:34:00.349911       1 shared_informer.go:262] Caches are synced for service config
	I1004 01:34:00.352304       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [53780ba803ee757fe4386d9e2c1531ad25b5d262580a8f6df77ce1f4780596b1] <==
	* I1004 01:33:54.641506       1 serving.go:348] Generated self-signed cert in-memory
	W1004 01:33:57.593856       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 01:33:57.594549       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:33:57.594570       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 01:33:57.594656       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 01:33:57.674802       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1004 01:33:57.674854       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:33:57.678651       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:33:57.678827       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:33:57.678844       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:33:57.678869       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:33:57.779308       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:33:16 UTC, ends at Wed 2023-10-04 01:34:15 UTC. --
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233586    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d1b714f-7d2a-40fe-8efa-6624a36f90be-xtables-lock\") pod \"kube-proxy-xcrw4\" (UID: \"1d1b714f-7d2a-40fe-8efa-6624a36f90be\") " pod="kube-system/kube-proxy-xcrw4"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233708    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kpmt\" (UniqueName: \"kubernetes.io/projected/1d1b714f-7d2a-40fe-8efa-6624a36f90be-kube-api-access-6kpmt\") pod \"kube-proxy-xcrw4\" (UID: \"1d1b714f-7d2a-40fe-8efa-6624a36f90be\") " pod="kube-system/kube-proxy-xcrw4"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233806    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume\") pod \"coredns-6d4b75cb6d-c2dhr\" (UID: \"4e05f356-f5cf-44b8-a421-f3b714ea1a5f\") " pod="kube-system/coredns-6d4b75cb6d-c2dhr"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233831    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884br\" (UniqueName: \"kubernetes.io/projected/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-kube-api-access-884br\") pod \"coredns-6d4b75cb6d-c2dhr\" (UID: \"4e05f356-f5cf-44b8-a421-f3b714ea1a5f\") " pod="kube-system/coredns-6d4b75cb6d-c2dhr"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233851    1102 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlk6g\" (UniqueName: \"kubernetes.io/projected/279a16db-b190-4752-aab9-b4cb9b8a2bfc-kube-api-access-tlk6g\") pod \"storage-provisioner\" (UID: \"279a16db-b190-4752-aab9-b4cb9b8a2bfc\") " pod="kube-system/storage-provisioner"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.233876    1102 reconciler.go:159] "Reconciler: start to sync state"
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.542767    1102 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/489b4920-2bbf-4ba8-bc07-37274d8b480c-config-volume\") pod \"489b4920-2bbf-4ba8-bc07-37274d8b480c\" (UID: \"489b4920-2bbf-4ba8-bc07-37274d8b480c\") "
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.542852    1102 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94z8h\" (UniqueName: \"kubernetes.io/projected/489b4920-2bbf-4ba8-bc07-37274d8b480c-kube-api-access-94z8h\") pod \"489b4920-2bbf-4ba8-bc07-37274d8b480c\" (UID: \"489b4920-2bbf-4ba8-bc07-37274d8b480c\") "
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: E1004 01:33:58.544140    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: E1004 01:33:58.544284    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume podName:4e05f356-f5cf-44b8-a421-f3b714ea1a5f nodeName:}" failed. No retries permitted until 2023-10-04 01:33:59.044265294 +0000 UTC m=+8.031407327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume") pod "coredns-6d4b75cb6d-c2dhr" (UID: "4e05f356-f5cf-44b8-a421-f3b714ea1a5f") : object "kube-system"/"coredns" not registered
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: W1004 01:33:58.545919    1102 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/489b4920-2bbf-4ba8-bc07-37274d8b480c/volumes/kubernetes.io~projected/kube-api-access-94z8h: clearQuota called, but quotas disabled
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: W1004 01:33:58.545992    1102 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/489b4920-2bbf-4ba8-bc07-37274d8b480c/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.546282    1102 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489b4920-2bbf-4ba8-bc07-37274d8b480c-kube-api-access-94z8h" (OuterVolumeSpecName: "kube-api-access-94z8h") pod "489b4920-2bbf-4ba8-bc07-37274d8b480c" (UID: "489b4920-2bbf-4ba8-bc07-37274d8b480c"). InnerVolumeSpecName "kube-api-access-94z8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.546849    1102 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/489b4920-2bbf-4ba8-bc07-37274d8b480c-config-volume" (OuterVolumeSpecName: "config-volume") pod "489b4920-2bbf-4ba8-bc07-37274d8b480c" (UID: "489b4920-2bbf-4ba8-bc07-37274d8b480c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.643650    1102 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/489b4920-2bbf-4ba8-bc07-37274d8b480c-config-volume\") on node \"test-preload-377961\" DevicePath \"\""
	Oct 04 01:33:58 test-preload-377961 kubelet[1102]: I1004 01:33:58.643677    1102 reconciler.go:384] "Volume detached for volume \"kube-api-access-94z8h\" (UniqueName: \"kubernetes.io/projected/489b4920-2bbf-4ba8-bc07-37274d8b480c-kube-api-access-94z8h\") on node \"test-preload-377961\" DevicePath \"\""
	Oct 04 01:33:59 test-preload-377961 kubelet[1102]: E1004 01:33:59.045297    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 04 01:33:59 test-preload-377961 kubelet[1102]: E1004 01:33:59.045408    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume podName:4e05f356-f5cf-44b8-a421-f3b714ea1a5f nodeName:}" failed. No retries permitted until 2023-10-04 01:34:00.045392513 +0000 UTC m=+9.032534543 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume") pod "coredns-6d4b75cb6d-c2dhr" (UID: "4e05f356-f5cf-44b8-a421-f3b714ea1a5f") : object "kube-system"/"coredns" not registered
	Oct 04 01:34:00 test-preload-377961 kubelet[1102]: E1004 01:34:00.052923    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 04 01:34:00 test-preload-377961 kubelet[1102]: E1004 01:34:00.053008    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume podName:4e05f356-f5cf-44b8-a421-f3b714ea1a5f nodeName:}" failed. No retries permitted until 2023-10-04 01:34:02.052993512 +0000 UTC m=+11.040135531 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume") pod "coredns-6d4b75cb6d-c2dhr" (UID: "4e05f356-f5cf-44b8-a421-f3b714ea1a5f") : object "kube-system"/"coredns" not registered
	Oct 04 01:34:00 test-preload-377961 kubelet[1102]: E1004 01:34:00.276713    1102 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-c2dhr" podUID=4e05f356-f5cf-44b8-a421-f3b714ea1a5f
	Oct 04 01:34:00 test-preload-377961 kubelet[1102]: I1004 01:34:00.336926    1102 scope.go:110] "RemoveContainer" containerID="b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819"
	Oct 04 01:34:01 test-preload-377961 kubelet[1102]: I1004 01:34:01.280810    1102 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=489b4920-2bbf-4ba8-bc07-37274d8b480c path="/var/lib/kubelet/pods/489b4920-2bbf-4ba8-bc07-37274d8b480c/volumes"
	Oct 04 01:34:02 test-preload-377961 kubelet[1102]: E1004 01:34:02.069453    1102 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 04 01:34:02 test-preload-377961 kubelet[1102]: E1004 01:34:02.069554    1102 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume podName:4e05f356-f5cf-44b8-a421-f3b714ea1a5f nodeName:}" failed. No retries permitted until 2023-10-04 01:34:06.069537418 +0000 UTC m=+15.056679437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4e05f356-f5cf-44b8-a421-f3b714ea1a5f-config-volume") pod "coredns-6d4b75cb6d-c2dhr" (UID: "4e05f356-f5cf-44b8-a421-f3b714ea1a5f") : object "kube-system"/"coredns" not registered
	
	* 
	* ==> storage-provisioner [83c781b7006f139c6d3fceff14d78613c48fbdbdd5c240fef4d1d91b70888b21] <==
	* I1004 01:34:00.509739       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:34:00.525759       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:34:00.525813       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [b6a439158e66cf99b9b63cc7e7650b56c5bf893b6fcec32dd732328523740819] <==
	* I1004 01:34:00.033762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 01:34:00.037346       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-377961 -n test-preload-377961
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-377961 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-377961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-377961
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-377961: (1.103965717s)
--- FAIL: TestPreload (182.70s)

TestRunningBinaryUpgrade (8.77s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1004 01:36:18.426416  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (5.028690168s)

-- stdout --
	! [running-upgrade-460129] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig480124634
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading driver docker-machine-driver-kvm2:
	* Downloading VM boot image ...
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > docker-machine-driver-kvm2.sha256: 65 B / 65 B [-------] 100.00% ? p/s 0s
	    > docker-machine-driver-kvm2: 13.86 MiB / 13.86 MiB [----] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 35.50 MiB / 150.93 MiB [-->_________] 23.52% ? p/s ?
	    > minikube-v1.6.0.iso: 96.96 MiB / 150.93 MiB [------->____] 64.24% ? p/s ?
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 383.95 MiB p/s 1s
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible: create: Error creating machine: Error in driver during machine creation: creating network: creating network minikube-net: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 78 (130.178839ms)

-- stdout --
	* [running-upgrade-460129] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2047280675
	* Selecting 'kvm2' driver from user configuration (alternates: [none])

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible
	* Error: [KVM2_NO_DOMAIN] Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'running-upgrade-460129'')
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Related issues:
	  - https://github.com/kubernetes/minikube/issues/3636
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.856337425.exe start -p running-upgrade-460129 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 78 (113.436394ms)

-- stdout --
	* [running-upgrade-460129] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2624411237
	* Selecting 'kvm2' driver from user configuration (alternates: [none])

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible
	* Error: [KVM2_NO_DOMAIN] Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'running-upgrade-460129'')
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Related issues:
	  - https://github.com/kubernetes/minikube/issues/3636
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:139: legacy v1.6.2 start failed: exit status 78
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-04 01:36:25.985439534 +0000 UTC m=+3178.356470567
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-460129 -n running-upgrade-460129
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-460129 -n running-upgrade-460129: exit status 85 (48.108259ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node running-upgrade-460129
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_status_8980859c28362053cbc8940f41f258f108f0854e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-460129" host is not running, skipping log retrieval (state="")
helpers_test.go:175: Cleaning up "running-upgrade-460129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-460129
--- FAIL: TestRunningBinaryUpgrade (8.77s)

TestKubernetesUpgrade (90.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-389799 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-389799 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 85 (1m17.193052698s)

-- stdout --
	* [kubernetes-upgrade-389799] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node kubernetes-upgrade-389799 in cluster kubernetes-upgrade-389799
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Restarting existing kvm2 VM for "kubernetes-upgrade-389799" ...
	
	

-- /stdout --
** stderr ** 
	I1004 01:36:17.372282  158162 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:36:17.372618  158162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.372626  158162 out.go:309] Setting ErrFile to fd 2...
	I1004 01:36:17.372632  158162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.372909  158162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:36:17.374029  158162 out.go:303] Setting JSON to false
	I1004 01:36:17.374926  158162 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8329,"bootTime":1696375049,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:36:17.374990  158162 start.go:138] virtualization: kvm guest
	I1004 01:36:17.376425  158162 out.go:177] * [kubernetes-upgrade-389799] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:36:17.378357  158162 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:36:17.379766  158162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:36:17.378450  158162 notify.go:220] Checking for updates...
	I1004 01:36:17.382432  158162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:36:17.385293  158162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:36:17.387395  158162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:36:17.388804  158162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:36:17.390183  158162 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:36:17.426267  158162 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:36:17.427520  158162 start.go:298] selected driver: kvm2
	I1004 01:36:17.427528  158162 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:36:17.427538  158162 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:36:17.428228  158162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.346122  158162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:36:19.361349  158162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:36:19.361408  158162 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:36:19.361816  158162 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 01:36:19.361879  158162 cni.go:84] Creating CNI manager for ""
	I1004 01:36:19.361894  158162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:36:19.361903  158162 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 01:36:19.361915  158162 start_flags.go:321] config:
	{Name:kubernetes-upgrade-389799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-389799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:36:19.362059  158162 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.363851  158162 out.go:177] * Starting control plane node kubernetes-upgrade-389799 in cluster kubernetes-upgrade-389799
	I1004 01:36:19.365223  158162 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 01:36:19.365258  158162 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1004 01:36:19.365267  158162 cache.go:57] Caching tarball of preloaded images
	I1004 01:36:19.365326  158162 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:36:19.365338  158162 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1004 01:36:19.365687  158162 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kubernetes-upgrade-389799/config.json ...
	I1004 01:36:19.365709  158162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kubernetes-upgrade-389799/config.json: {Name:mk29f7593bcfabc65103ae8e2b911979cc3abb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:36:19.365850  158162 start.go:365] acquiring machines lock for kubernetes-upgrade-389799: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:36:38.082793  158162 start.go:369] acquired machines lock for "kubernetes-upgrade-389799" in 18.716873829s
	I1004 01:36:38.082860  158162 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-389799 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-389799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:36:38.082969  158162 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 01:36:38.085061  158162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1004 01:36:38.085237  158162 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:36:38.085275  158162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:36:38.102466  158162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40085
	I1004 01:36:38.102990  158162 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:36:38.103562  158162 main.go:141] libmachine: Using API Version  1
	I1004 01:36:38.103584  158162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:36:38.103988  158162 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:36:38.104183  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .GetMachineName
	I1004 01:36:38.104371  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .DriverName
	I1004 01:36:38.104535  158162 start.go:159] libmachine.API.Create for "kubernetes-upgrade-389799" (driver="kvm2")
	I1004 01:36:38.104567  158162 client.go:168] LocalClient.Create starting
	I1004 01:36:38.104601  158162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 01:36:38.104636  158162 main.go:141] libmachine: Decoding PEM data...
	I1004 01:36:38.104656  158162 main.go:141] libmachine: Parsing certificate...
	I1004 01:36:38.104726  158162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 01:36:38.104755  158162 main.go:141] libmachine: Decoding PEM data...
	I1004 01:36:38.104776  158162 main.go:141] libmachine: Parsing certificate...
	I1004 01:36:38.104802  158162 main.go:141] libmachine: Running pre-create checks...
	I1004 01:36:38.104815  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .PreCreateCheck
	I1004 01:36:38.105221  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .GetConfigRaw
	I1004 01:36:38.105638  158162 main.go:141] libmachine: Creating machine...
	I1004 01:36:38.105655  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .Create
	I1004 01:36:38.105807  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Creating KVM machine...
	I1004 01:36:38.112973  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) KVM machine creation complete!
	I1004 01:36:38.113036  158162 client.go:171] LocalClient.Create took 8.450407ms
	I1004 01:36:40.113947  158162 start.go:128] duration metric: createHost completed in 2.030958895s
	I1004 01:36:40.113978  158162 start.go:83] releasing machines lock for "kubernetes-upgrade-389799", held for 2.031154611s
	W1004 01:36:40.114004  158162 start.go:688] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: creating network: creating network mk-kubernetes-upgrade-389799: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	I1004 01:36:40.114479  158162 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:36:40.114517  158162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:36:40.133737  158162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43095
	I1004 01:36:40.134280  158162 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:36:40.134835  158162 main.go:141] libmachine: Using API Version  1
	I1004 01:36:40.134860  158162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:36:40.135180  158162 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:36:40.135769  158162 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:36:40.135811  158162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:36:40.160967  158162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43987
	I1004 01:36:40.161584  158162 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:36:40.162193  158162 main.go:141] libmachine: Using API Version  1
	I1004 01:36:40.162219  158162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:36:40.162503  158162 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:36:40.162627  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .GetState
	I1004 01:36:40.171500  158162 delete.go:82] Unable to get host status for kubernetes-upgrade-389799, assuming it has already been deleted: state: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	W1004 01:36:40.171592  158162 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: creating network: creating network mk-kubernetes-upgrade-389799: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: creating network: creating network mk-kubernetes-upgrade-389799: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	I1004 01:36:40.171628  158162 start.go:703] Will try again in 5 seconds ...
	I1004 01:36:45.171806  158162 start.go:365] acquiring machines lock for kubernetes-upgrade-389799: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:37:34.399090  158162 start.go:369] acquired machines lock for "kubernetes-upgrade-389799" in 49.227211862s
	I1004 01:37:34.399167  158162 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:37:34.399182  158162 fix.go:54] fixHost starting: 
	I1004 01:37:34.399535  158162 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:37:34.399611  158162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:37:34.418891  158162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43989
	I1004 01:37:34.419304  158162 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:37:34.419753  158162 main.go:141] libmachine: Using API Version  1
	I1004 01:37:34.419777  158162 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:37:34.420200  158162 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:37:34.420373  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .DriverName
	I1004 01:37:34.420561  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .GetState
	I1004 01:37:34.422091  158162 fix.go:102] recreateIfNeeded on kubernetes-upgrade-389799: state=Error err=getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	I1004 01:37:34.422147  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .DriverName
	I1004 01:37:34.422328  158162 fix.go:107] machineExists: true. err=getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	W1004 01:37:34.422348  158162 fix.go:128] unexpected machine state, will restart: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	I1004 01:37:34.424710  158162 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-389799" ...
	I1004 01:37:34.426118  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Calling .Start
	I1004 01:37:34.426351  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Ensuring networks are active...
	I1004 01:37:34.427287  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Ensuring network default is active
	I1004 01:37:34.427698  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Ensuring network mk-kubernetes-upgrade-389799 is active
	I1004 01:37:34.507276  158162 main.go:141] libmachine: (kubernetes-upgrade-389799) Getting domain xml...
	I1004 01:37:34.508326  158162 fix.go:56] fixHost completed within 109.145659ms
	I1004 01:37:34.508346  158162 start.go:83] releasing machines lock for "kubernetes-upgrade-389799", held for 109.224131ms
	W1004 01:37:34.508445  158162 out.go:239] * Failed to start kvm2 VM. Running "minikube delete -p kubernetes-upgrade-389799" may fix it: driver start: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	* Failed to start kvm2 VM. Running "minikube delete -p kubernetes-upgrade-389799" may fix it: driver start: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	I1004 01:37:34.511746  158162 out.go:177] 
	W1004 01:37:34.513196  158162 out.go:239] X Exiting due to GUEST_KVM2_NO_DOMAIN: Failed to start host: driver start: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	X Exiting due to GUEST_KVM2_NO_DOMAIN: Failed to start host: driver start: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	W1004 01:37:34.513228  158162 out.go:239] * Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	W1004 01:37:34.513251  158162 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/3636
	* Related issue: https://github.com/kubernetes/minikube/issues/3636
	I1004 01:37:34.514788  158162 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-389799 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 85
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-389799
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p kubernetes-upgrade-389799: exit status 85 (13.055673031s)

-- stdout --
	* Stopping node "kubernetes-upgrade-389799"  ...
	* Stopping node "kubernetes-upgrade-389799"  ...
	* Stopping node "kubernetes-upgrade-389799"  ...
	* Stopping node "kubernetes-upgrade-389799"  ...
	* Stopping node "kubernetes-upgrade-389799"  ...
	* Stopping node "kubernetes-upgrade-389799"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_KVM2_NO_DOMAIN: Temporary Error: stop: getting state of VM: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')
	* Suggestion: The VM that minikube is configured for no longer exists. Run 'minikube delete'
	* Related issue: https://github.com/kubernetes/minikube/issues/3636

** /stderr **
version_upgrade_test.go:242: out/minikube-linux-amd64 stop -p kubernetes-upgrade-389799 failed: exit status 85
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-04 01:37:47.577036939 +0000 UTC m=+3259.948067980
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-389799 -n kubernetes-upgrade-389799
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-389799 -n kubernetes-upgrade-389799: exit status 7 (80.503085ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1004 01:37:47.653814  159871 status.go:249] status error: host: state: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'kubernetes-upgrade-389799'')

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-389799" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-389799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-389799
--- FAIL: TestKubernetesUpgrade (90.46s)
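
Triage sketch (not part of the captured test output): both the failed start and the failed stop above hit the same libvirt GUEST_KVM2_NO_DOMAIN condition, i.e. the minikube profile still references a domain that libvirt no longer has defined. A minimal diagnostic sequence for the agent, assuming the default qemu:///system connection; the commands below are ordinary virsh/minikube CLI invocations, not taken from the report:

	# confirm the domain is really gone from libvirt while the profile config still names it
	virsh -c qemu:///system list --all | grep kubernetes-upgrade-389799
	# check whether the profile's private network definition survived the domain's disappearance
	virsh -c qemu:///system net-list --all | grep mk-kubernetes-upgrade-389799
	# remedy suggested by the log itself: drop the stale profile, then recreate it
	minikube delete -p kubernetes-upgrade-389799
	minikube start -p kubernetes-upgrade-389799 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio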

TestStoppedBinaryUpgrade/Upgrade (4.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (2.534230421s)

-- stdout --
	* [stopped-upgrade-650200] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2398600012
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible: create: Error creating machine: Error in driver during machine creation: ensuring active networks: starting network minikube-net: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (122.600558ms)

-- stdout --
	* [stopped-upgrade-650200] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1885461097
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Starting existing kvm2 VM for "stopped-upgrade-650200" ...

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible: start: ensuring active networks: starting network minikube-net: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.1358558199.exe start -p stopped-upgrade-650200 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (119.170018ms)

-- stdout --
	* [stopped-upgrade-650200] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2037821404
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Starting existing kvm2 VM for "stopped-upgrade-650200" ...

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Unable to start VM. Please investigate and run 'minikube delete' if possible: start: ensuring active networks: starting network minikube-net: virError(Code=1, Domain=0, Message='internal error: Network is already in use by interface virbr1')
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:202: legacy v1.6.2 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (4.54s)
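
Triage sketch (not part of the captured test output): the legacy v1.6.2 binary reports two separate problems in its stderr: virsh domcapabilities says KVM is not usable by /usr/bin/qemu-system-x86_64 on this (nested) agent, and its minikube-net network collides with an interface virbr1 that another libvirt network already holds. A minimal diagnostic sequence under those assumptions, using ordinary virsh/iproute2 commands that are not taken from the report:

	# does the agent expose /dev/kvm, and does libvirt agree KVM is usable?
	ls -l /dev/kvm
	virsh -c qemu:///system domcapabilities --virttype kvm > /dev/null && echo "kvm usable"
	# what currently defines the colliding network, and what holds the virbr1 bridge?
	virsh -c qemu:///system net-list --all
	ip -brief addr show virbr1
	# if minikube-net is a stale leftover, tearing it down before retrying may clear the collision
	virsh -c qemu:///system net-destroy minikube-net
	virsh -c qemu:///system net-undefine minikube-net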

TestStoppedBinaryUpgrade/MinikubeLogs (0.09s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-650200
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p stopped-upgrade-650200: exit status 89 (86.131498ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823 sudo cat                                                               |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m02 sudo cat                                   | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823_multinode-038823-m02.txt                          |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823:/home/docker/cp-test.txt                           | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03:/home/docker/cp-test_multinode-038823_multinode-038823-m03.txt     |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823 sudo cat                                                               |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m03 sudo cat                                   | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823_multinode-038823-m03.txt                          |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp testdata/cp-test.txt                                                | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02:/home/docker/cp-test.txt                                           |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m02.txt           |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823:/home/docker/cp-test_multinode-038823-m02_multinode-038823.txt         |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823 sudo cat                                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m02_multinode-038823.txt                          |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03:/home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m03 sudo cat                                   | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt                      |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp testdata/cp-test.txt                                                | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03:/home/docker/cp-test.txt                                           |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m03.txt           |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823:/home/docker/cp-test_multinode-038823-m03_multinode-038823.txt         |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823 sudo cat                                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823.txt                          |                           |         |         |                     |                     |
	| cp      | multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt                       | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m02:/home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | multinode-038823-m03 sudo cat                                                           |                           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                           |         |         |                     |                     |
	| ssh     | multinode-038823 ssh -n multinode-038823-m02 sudo cat                                   | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	|         | /home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt                      |                           |         |         |                     |                     |
	| node    | multinode-038823 node stop m03                                                          | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:08 UTC |
	| node    | multinode-038823 node start                                                             | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:08 UTC | 04 Oct 23 01:09 UTC |
	|         | m03 --alsologtostderr                                                                   |                           |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| stop    | -p multinode-038823                                                                     | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:09 UTC |                     |
	| start   | -p multinode-038823                                                                     | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:11 UTC | 04 Oct 23 01:20 UTC |
	|         | --wait=true -v=8                                                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                           |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC |                     |
	| node    | multinode-038823 node delete                                                            | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC | 04 Oct 23 01:20 UTC |
	|         | m03                                                                                     |                           |         |         |                     |                     |
	| stop    | multinode-038823 stop                                                                   | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:20 UTC |                     |
	| start   | -p multinode-038823                                                                     | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:22 UTC | 04 Oct 23 01:30 UTC |
	|         | --wait=true -v=8                                                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                           |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| node    | list -p multinode-038823                                                                | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC |                     |
	| start   | -p multinode-038823-m02                                                                 | multinode-038823-m02      | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC |                     |
	|         | --driver=kvm2                                                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| start   | -p multinode-038823-m03                                                                 | multinode-038823-m03      | jenkins | v1.31.2 | 04 Oct 23 01:30 UTC | 04 Oct 23 01:31 UTC |
	|         | --driver=kvm2                                                                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| node    | add -p multinode-038823                                                                 | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC |                     |
	| delete  | -p multinode-038823-m03                                                                 | multinode-038823-m03      | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:31 UTC |
	| delete  | -p multinode-038823                                                                     | multinode-038823          | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:31 UTC |
	| start   | -p test-preload-377961                                                                  | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:31 UTC | 04 Oct 23 01:32 UTC |
	|         | --memory=2200                                                                           |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                           |         |         |                     |                     |
	| image   | test-preload-377961 image pull                                                          | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:32 UTC | 04 Oct 23 01:32 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                           |         |         |                     |                     |
	| stop    | -p test-preload-377961                                                                  | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:32 UTC | 04 Oct 23 01:33 UTC |
	| start   | -p test-preload-377961                                                                  | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:33 UTC | 04 Oct 23 01:34 UTC |
	|         | --memory=2200                                                                           |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| image   | test-preload-377961 image list                                                          | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:34 UTC | 04 Oct 23 01:34 UTC |
	| delete  | -p test-preload-377961                                                                  | test-preload-377961       | jenkins | v1.31.2 | 04 Oct 23 01:34 UTC | 04 Oct 23 01:34 UTC |
	| start   | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:34 UTC | 04 Oct 23 01:35 UTC |
	|         | --memory=2048 --driver=kvm2                                                             |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 5m                                                                           |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 5m                                                                           |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 5m                                                                           |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC | 04 Oct 23 01:35 UTC |
	|         | --cancel-scheduled                                                                      |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC |                     |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:35 UTC | 04 Oct 23 01:35 UTC |
	|         | --schedule 15s                                                                          |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-934910                                                                | scheduled-stop-934910     | jenkins | v1.31.2 | 04 Oct 23 01:36 UTC | 04 Oct 23 01:36 UTC |
	| start   | -p offline-crio-398840                                                                  | offline-crio-398840       | jenkins | v1.31.2 | 04 Oct 23 01:36 UTC |                     |
	|         | --alsologtostderr                                                                       |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048                                                                      |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-389799                                                            | kubernetes-upgrade-389799 | jenkins | v1.31.2 | 04 Oct 23 01:36 UTC |                     |
	|         | --memory=2200                                                                           |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                            |                           |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                                                      |                           |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                           |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:36:17
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:36:17.369997  158163 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:36:17.370257  158163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.370264  158163 out.go:309] Setting ErrFile to fd 2...
	I1004 01:36:17.370271  158163 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.370602  158163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:36:17.371486  158163 out.go:303] Setting JSON to false
	I1004 01:36:17.372574  158163 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8329,"bootTime":1696375049,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:36:17.372662  158163 start.go:138] virtualization: kvm guest
	I1004 01:36:17.372282  158162 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:36:17.372618  158162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.372626  158162 out.go:309] Setting ErrFile to fd 2...
	I1004 01:36:17.372632  158162 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:36:17.372909  158162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:36:17.374029  158162 out.go:303] Setting JSON to false
	I1004 01:36:17.374926  158162 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8329,"bootTime":1696375049,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:36:17.374990  158162 start.go:138] virtualization: kvm guest
	I1004 01:36:17.375366  158163 out.go:177] * [offline-crio-398840] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:36:17.376425  158162 out.go:177] * [kubernetes-upgrade-389799] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:36:17.378252  158163 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:36:17.377470  158163 notify.go:220] Checking for updates...
	I1004 01:36:17.378357  158162 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:36:17.379766  158162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:36:17.379811  158163 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:36:17.378450  158162 notify.go:220] Checking for updates...
	I1004 01:36:17.381138  158163 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:36:17.382432  158162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:36:17.383815  158163 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:36:17.385293  158162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:36:17.385400  158163 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:36:17.387395  158162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:36:17.387420  158163 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:36:17.388804  158162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:36:17.389094  158163 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:36:17.390183  158162 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:36:17.426239  158163 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:36:17.426267  158162 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:36:17.427489  158163 start.go:298] selected driver: kvm2
	I1004 01:36:17.427502  158163 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:36:17.427513  158163 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:36:17.428182  158163 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.330714  158163 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:36:19.345971  158163 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:36:19.346044  158163 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:36:19.346340  158163 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:36:19.346381  158163 cni.go:84] Creating CNI manager for ""
	I1004 01:36:19.346396  158163 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:36:19.346410  158163 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 01:36:19.346421  158163 start_flags.go:321] config:
	{Name:offline-crio-398840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:offline-crio-398840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:36:19.346641  158163 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.350424  158163 out.go:177] * Starting control plane node offline-crio-398840 in cluster offline-crio-398840
	I1004 01:36:19.351729  158163 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:36:19.351771  158163 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:36:19.351790  158163 cache.go:57] Caching tarball of preloaded images
	I1004 01:36:19.351886  158163 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:36:19.351900  158163 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:36:19.352224  158163 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/offline-crio-398840/config.json ...
	I1004 01:36:19.352248  158163 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/offline-crio-398840/config.json: {Name:mk68ee4918b90e8e6fe42e4fdbe3469f657e85c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:36:19.352403  158163 start.go:365] acquiring machines lock for offline-crio-398840: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:36:19.352464  158163 start.go:369] acquired machines lock for "offline-crio-398840" in 41.216µs
	I1004 01:36:19.352491  158163 start.go:93] Provisioning new machine with config: &{Name:offline-crio-398840 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.2 ClusterName:offline-crio-398840 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:36:19.352552  158163 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 01:36:17.427520  158162 start.go:298] selected driver: kvm2
	I1004 01:36:17.427528  158162 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:36:17.427538  158162 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:36:17.428228  158162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.346122  158162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:36:19.361349  158162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:36:19.361408  158162 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:36:19.361816  158162 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 01:36:19.361879  158162 cni.go:84] Creating CNI manager for ""
	I1004 01:36:19.361894  158162 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:36:19.361903  158162 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 01:36:19.361915  158162 start_flags.go:321] config:
	{Name:kubernetes-upgrade-389799 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-389799 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:36:19.362059  158162 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:36:19.363851  158162 out.go:177] * Starting control plane node kubernetes-upgrade-389799 in cluster kubernetes-upgrade-389799
	I1004 01:36:19.354300  158163 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1004 01:36:19.354432  158163 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:36:19.354474  158163 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:36:19.370974  158163 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40757
	I1004 01:36:19.371538  158163 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:36:19.372165  158163 main.go:141] libmachine: Using API Version  1
	I1004 01:36:19.372190  158163 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:36:19.372615  158163 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:36:19.372807  158163 main.go:141] libmachine: (offline-crio-398840) Calling .GetMachineName
	I1004 01:36:19.372948  158163 main.go:141] libmachine: (offline-crio-398840) Calling .DriverName
	I1004 01:36:19.373104  158163 start.go:159] libmachine.API.Create for "offline-crio-398840" (driver="kvm2")
	I1004 01:36:19.373138  158163 client.go:168] LocalClient.Create starting
	I1004 01:36:19.373172  158163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 01:36:19.373207  158163 main.go:141] libmachine: Decoding PEM data...
	I1004 01:36:19.373221  158163 main.go:141] libmachine: Parsing certificate...
	I1004 01:36:19.373329  158163 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 01:36:19.373352  158163 main.go:141] libmachine: Decoding PEM data...
	I1004 01:36:19.373365  158163 main.go:141] libmachine: Parsing certificate...
	I1004 01:36:19.373384  158163 main.go:141] libmachine: Running pre-create checks...
	I1004 01:36:19.373404  158163 main.go:141] libmachine: (offline-crio-398840) Calling .PreCreateCheck
	I1004 01:36:19.373753  158163 main.go:141] libmachine: (offline-crio-398840) Calling .GetConfigRaw
	I1004 01:36:19.374157  158163 main.go:141] libmachine: Creating machine...
	I1004 01:36:19.374173  158163 main.go:141] libmachine: (offline-crio-398840) Calling .Create
	I1004 01:36:19.374313  158163 main.go:141] libmachine: (offline-crio-398840) Creating KVM machine...
	I1004 01:36:19.456469  158163 main.go:141] libmachine: (offline-crio-398840) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840 ...
	I1004 01:36:19.456511  158163 main.go:141] libmachine: (offline-crio-398840) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 01:36:19.456531  158163 main.go:141] libmachine: (offline-crio-398840) DBG | ERROR: logging before flag.Parse: I1004 01:36:19.456374  158274 common.go:99] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:36:19.456638  158163 main.go:141] libmachine: (offline-crio-398840) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 01:36:19.717956  158163 main.go:141] libmachine: (offline-crio-398840) DBG | ERROR: logging before flag.Parse: I1004 01:36:19.717791  158274 common.go:106] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840/id_rsa...
	I1004 01:36:19.905116  158163 main.go:141] libmachine: (offline-crio-398840) DBG | ERROR: logging before flag.Parse: I1004 01:36:19.904867  158274 common.go:112] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840/offline-crio-398840.rawdisk...
	I1004 01:36:19.905162  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840 (perms=drwx------)
	I1004 01:36:19.905175  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Writing magic tar header
	I1004 01:36:19.905191  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Writing SSH key tar header
	I1004 01:36:19.905253  158163 main.go:141] libmachine: (offline-crio-398840) DBG | ERROR: logging before flag.Parse: I1004 01:36:19.904999  158274 common.go:126] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840 ...
	I1004 01:36:19.905280  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 01:36:19.905401  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 01:36:19.905418  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 01:36:19.905427  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/offline-crio-398840
	I1004 01:36:19.905438  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 01:36:19.905457  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:36:19.905476  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 01:36:19.905490  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 01:36:19.905505  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home/jenkins
	I1004 01:36:19.905521  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 01:36:19.905576  158163 main.go:141] libmachine: (offline-crio-398840) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 01:36:19.905600  158163 main.go:141] libmachine: (offline-crio-398840) Creating domain...
	I1004 01:36:19.905619  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Checking permissions on dir: /home
	I1004 01:36:19.905707  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Skipping /home - not owner
	I1004 01:36:19.910828  158163 main.go:141] libmachine: (offline-crio-398840) Creating network...
	I1004 01:36:19.911932  158163 main.go:141] libmachine: (offline-crio-398840) Ensuring networks are active...
	I1004 01:36:19.912639  158163 main.go:141] libmachine: (offline-crio-398840) Ensuring network default is active
	I1004 01:36:19.912945  158163 main.go:141] libmachine: (offline-crio-398840) Ensuring network mk-offline-crio-398840 is active
	I1004 01:36:19.913392  158163 main.go:141] libmachine: (offline-crio-398840) Getting domain xml...
	I1004 01:36:19.914235  158163 main.go:141] libmachine: (offline-crio-398840) Creating domain...
	I1004 01:36:21.424955  158163 main.go:141] libmachine: (offline-crio-398840) Waiting to get IP...
	I1004 01:36:21.431627  158163 main.go:141] libmachine: (offline-crio-398840) DBG | Waiting for machine to come up 0/40
	I1004 01:36:19.365223  158162 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 01:36:19.365258  158162 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1004 01:36:19.365267  158162 cache.go:57] Caching tarball of preloaded images
	I1004 01:36:19.365326  158162 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:36:19.365338  158162 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1004 01:36:19.365687  158162 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kubernetes-upgrade-389799/config.json ...
	I1004 01:36:19.365709  158162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kubernetes-upgrade-389799/config.json: {Name:mk29f7593bcfabc65103ae8e2b911979cc3abb54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:36:19.365850  158162 start.go:365] acquiring machines lock for kubernetes-upgrade-389799: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	* 
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p stopped-upgrade-650200"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 89
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.09s)
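For context, a minimal sketch (Go, not the actual version_upgrade_test.go code) of how this failure mode can be reproduced by hand: run "minikube logs" against a profile whose control plane is stopped and inspect the exit status. The binary name and the profile name "stopped-upgrade-650200" are taken from the report output above and are placeholders here, not assertions about the test's implementation.

	// sketch.go - illustrative only; assumes "minikube" is on PATH and the
	// profile name matches the stopped cluster from the report above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "logs", "-p", "stopped-upgrade-650200")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output:\n%s\n", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// A non-zero code (exit status 89 in the report) means the logs
			// command bailed out because the control plane node is not running.
			fmt.Printf("minikube logs exited with code %d\n", exitErr.ExitCode())
		} else if err != nil {
			fmt.Printf("failed to run minikube logs: %v\n", err)
		}
	}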

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (72.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-720999 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-720999 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.500969172s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-720999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-720999 in cluster pause-720999
	* Updating the running kvm2 "pause-720999" VM ...
	* Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-720999" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:38:41.149099  162563 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:38:41.149377  162563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:38:41.149389  162563 out.go:309] Setting ErrFile to fd 2...
	I1004 01:38:41.149396  162563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:38:41.149677  162563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:38:41.150398  162563 out.go:303] Setting JSON to false
	I1004 01:38:41.151403  162563 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8472,"bootTime":1696375049,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:38:41.151483  162563 start.go:138] virtualization: kvm guest
	I1004 01:38:41.153811  162563 out.go:177] * [pause-720999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:38:41.155285  162563 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:38:41.155369  162563 notify.go:220] Checking for updates...
	I1004 01:38:41.156766  162563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:38:41.158377  162563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:38:41.159760  162563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:38:41.161090  162563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:38:41.162554  162563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:38:41.166494  162563 config.go:182] Loaded profile config "pause-720999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:38:41.167122  162563 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:38:41.167179  162563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:38:41.188858  162563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40631
	I1004 01:38:41.189490  162563 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:38:41.190180  162563 main.go:141] libmachine: Using API Version  1
	I1004 01:38:41.190213  162563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:38:41.190617  162563 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:38:41.190851  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:38:41.191127  162563 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:38:41.191438  162563 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:38:41.191504  162563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:38:41.208565  162563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36865
	I1004 01:38:41.209094  162563 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:38:41.209616  162563 main.go:141] libmachine: Using API Version  1
	I1004 01:38:41.209643  162563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:38:41.210058  162563 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:38:41.210268  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:38:41.256764  162563 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:38:41.258238  162563 start.go:298] selected driver: kvm2
	I1004 01:38:41.258255  162563 start.go:902] validating driver "kvm2" against &{Name:pause-720999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.2 ClusterName:pause-720999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:38:41.258417  162563 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:38:41.258863  162563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:38:41.258973  162563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:38:41.274541  162563 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:38:41.275596  162563 cni.go:84] Creating CNI manager for ""
	I1004 01:38:41.275610  162563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:38:41.275618  162563 start_flags.go:321] config:
	{Name:pause-720999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-720999 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alia
ses:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:38:41.275806  162563 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:38:41.277942  162563 out.go:177] * Starting control plane node pause-720999 in cluster pause-720999
	I1004 01:38:41.279440  162563 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:38:41.279488  162563 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:38:41.279502  162563 cache.go:57] Caching tarball of preloaded images
	I1004 01:38:41.279606  162563 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:38:41.279620  162563 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:38:41.279765  162563 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/config.json ...
	I1004 01:38:41.280000  162563 start.go:365] acquiring machines lock for pause-720999: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:39:16.359302  162563 start.go:369] acquired machines lock for "pause-720999" in 35.079248748s
	I1004 01:39:16.359360  162563 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:39:16.359368  162563 fix.go:54] fixHost starting: 
	I1004 01:39:16.359866  162563 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:39:16.359933  162563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:39:16.377680  162563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40071
	I1004 01:39:16.378169  162563 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:39:16.378693  162563 main.go:141] libmachine: Using API Version  1
	I1004 01:39:16.378716  162563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:39:16.379041  162563 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:39:16.379277  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:16.379472  162563 main.go:141] libmachine: (pause-720999) Calling .GetState
	I1004 01:39:16.381755  162563 fix.go:102] recreateIfNeeded on pause-720999: state=Running err=<nil>
	W1004 01:39:16.381791  162563 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:39:16.383838  162563 out.go:177] * Updating the running kvm2 "pause-720999" VM ...
	I1004 01:39:16.385383  162563 machine.go:88] provisioning docker machine ...
	I1004 01:39:16.385415  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:16.385671  162563 main.go:141] libmachine: (pause-720999) Calling .GetMachineName
	I1004 01:39:16.385859  162563 buildroot.go:166] provisioning hostname "pause-720999"
	I1004 01:39:16.385880  162563 main.go:141] libmachine: (pause-720999) Calling .GetMachineName
	I1004 01:39:16.386044  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:16.388824  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.389309  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.389374  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.389527  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:16.389739  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.389946  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.390134  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:16.390284  162563 main.go:141] libmachine: Using SSH client type: native
	I1004 01:39:16.390691  162563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I1004 01:39:16.390706  162563 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-720999 && echo "pause-720999" | sudo tee /etc/hostname
	I1004 01:39:16.538894  162563 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-720999
	
	I1004 01:39:16.538941  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:16.542851  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.543307  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.543348  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.543519  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:16.543740  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.543948  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.544150  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:16.544354  162563 main.go:141] libmachine: Using SSH client type: native
	I1004 01:39:16.544726  162563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I1004 01:39:16.544750  162563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-720999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-720999/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-720999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:39:16.677465  162563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:39:16.677501  162563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:39:16.677528  162563 buildroot.go:174] setting up certificates
	I1004 01:39:16.677541  162563 provision.go:83] configureAuth start
	I1004 01:39:16.677553  162563 main.go:141] libmachine: (pause-720999) Calling .GetMachineName
	I1004 01:39:16.677943  162563 main.go:141] libmachine: (pause-720999) Calling .GetIP
	I1004 01:39:16.681255  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.681736  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.681787  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.682068  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:16.685551  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.686061  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.686115  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.686377  162563 provision.go:138] copyHostCerts
	I1004 01:39:16.686460  162563 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:39:16.686473  162563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:39:16.686548  162563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:39:16.686734  162563 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:39:16.686751  162563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:39:16.686790  162563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:39:16.686868  162563 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:39:16.686883  162563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:39:16.686907  162563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:39:16.686970  162563 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.pause-720999 san=[192.168.72.236 192.168.72.236 localhost 127.0.0.1 minikube pause-720999]
	I1004 01:39:16.758485  162563 provision.go:172] copyRemoteCerts
	I1004 01:39:16.758566  162563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:39:16.758600  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:16.761894  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.762365  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.762403  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.762631  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:16.762865  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.763022  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:16.763192  162563 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/pause-720999/id_rsa Username:docker}
	I1004 01:39:16.855673  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:39:16.887199  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1004 01:39:16.914311  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 01:39:16.941170  162563 provision.go:86] duration metric: configureAuth took 263.612191ms
	I1004 01:39:16.941206  162563 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:39:16.941495  162563 config.go:182] Loaded profile config "pause-720999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:39:16.941598  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:16.944563  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.945041  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:16.945110  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:16.945312  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:16.945531  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.945779  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:16.945976  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:16.946193  162563 main.go:141] libmachine: Using SSH client type: native
	I1004 01:39:16.946675  162563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I1004 01:39:16.946704  162563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:39:22.758343  162563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:39:22.758375  162563 machine.go:91] provisioned docker machine in 6.372970743s
	I1004 01:39:22.758388  162563 start.go:300] post-start starting for "pause-720999" (driver="kvm2")
	I1004 01:39:22.758401  162563 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:39:22.758422  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:22.758800  162563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:39:22.758834  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:22.762104  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:22.762591  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:22.762621  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:22.762902  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:22.763143  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:22.763305  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:22.763546  162563 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/pause-720999/id_rsa Username:docker}
	I1004 01:39:22.879161  162563 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:39:22.883811  162563 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:39:22.883841  162563 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:39:22.883926  162563 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:39:22.884034  162563 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:39:22.884156  162563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:39:22.897079  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:39:22.927580  162563 start.go:303] post-start completed in 169.173087ms
	I1004 01:39:22.927615  162563 fix.go:56] fixHost completed within 6.56824661s
	I1004 01:39:22.927643  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:22.930877  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:22.931397  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:22.931438  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:22.931621  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:22.931831  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:22.932032  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:22.932206  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:22.932392  162563 main.go:141] libmachine: Using SSH client type: native
	I1004 01:39:22.932868  162563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.236 22 <nil> <nil>}
	I1004 01:39:22.932887  162563 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1004 01:39:23.059283  162563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696383563.055760123
	
	I1004 01:39:23.059316  162563 fix.go:206] guest clock: 1696383563.055760123
	I1004 01:39:23.059324  162563 fix.go:219] Guest: 2023-10-04 01:39:23.055760123 +0000 UTC Remote: 2023-10-04 01:39:22.927619617 +0000 UTC m=+41.816827920 (delta=128.140506ms)
	I1004 01:39:23.059343  162563 fix.go:190] guest clock delta is within tolerance: 128.140506ms
	I1004 01:39:23.059349  162563 start.go:83] releasing machines lock for "pause-720999", held for 6.700013872s
	I1004 01:39:23.059380  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:23.059700  162563 main.go:141] libmachine: (pause-720999) Calling .GetIP
	I1004 01:39:23.062783  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.063315  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:23.063352  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.063637  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:23.064332  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:23.064591  162563 main.go:141] libmachine: (pause-720999) Calling .DriverName
	I1004 01:39:23.064709  162563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:39:23.064755  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:23.065032  162563 ssh_runner.go:195] Run: cat /version.json
	I1004 01:39:23.065063  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHHostname
	I1004 01:39:23.068162  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.068183  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.068408  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:23.068444  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.068597  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:23.068620  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:23.068724  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:23.068796  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHPort
	I1004 01:39:23.068919  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:23.068954  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHKeyPath
	I1004 01:39:23.069045  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:23.069091  162563 main.go:141] libmachine: (pause-720999) Calling .GetSSHUsername
	I1004 01:39:23.069271  162563 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/pause-720999/id_rsa Username:docker}
	I1004 01:39:23.069271  162563 sshutil.go:53] new ssh client: &{IP:192.168.72.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/pause-720999/id_rsa Username:docker}
	I1004 01:39:23.165015  162563 ssh_runner.go:195] Run: systemctl --version
	I1004 01:39:23.208455  162563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:39:23.368582  162563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:39:23.376886  162563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:39:23.376950  162563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:39:23.388277  162563 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1004 01:39:23.388308  162563 start.go:469] detecting cgroup driver to use...
	I1004 01:39:23.388378  162563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:39:23.407997  162563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:39:23.422691  162563 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:39:23.422756  162563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:39:23.440722  162563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:39:23.455933  162563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:39:23.646178  162563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:39:23.876146  162563 docker.go:213] disabling docker service ...
	I1004 01:39:23.876225  162563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:39:23.913087  162563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:39:23.931476  162563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:39:24.064478  162563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:39:24.234756  162563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:39:24.252196  162563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:39:24.275912  162563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:39:24.275985  162563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:39:24.288289  162563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:39:24.288361  162563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:39:24.303754  162563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:39:24.316949  162563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:39:24.334854  162563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:39:24.350396  162563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:39:24.362689  162563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:39:24.375372  162563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:39:24.548422  162563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:39:26.542484  162563 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.994014467s)
	I1004 01:39:26.542516  162563 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:39:26.542569  162563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:39:26.549510  162563 start.go:537] Will wait 60s for crictl version
	I1004 01:39:26.549573  162563 ssh_runner.go:195] Run: which crictl
	I1004 01:39:26.554514  162563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:39:26.611994  162563 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:39:26.612091  162563 ssh_runner.go:195] Run: crio --version
	I1004 01:39:26.661207  162563 ssh_runner.go:195] Run: crio --version
	I1004 01:39:26.729065  162563 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:39:26.730479  162563 main.go:141] libmachine: (pause-720999) Calling .GetIP
	I1004 01:39:26.733746  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:26.734175  162563 main.go:141] libmachine: (pause-720999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0a:6e:d1", ip: ""} in network mk-pause-720999: {Iface:virbr1 ExpiryTime:2023-10-04 02:37:50 +0000 UTC Type:0 Mac:52:54:00:0a:6e:d1 Iaid: IPaddr:192.168.72.236 Prefix:24 Hostname:pause-720999 Clientid:01:52:54:00:0a:6e:d1}
	I1004 01:39:26.734209  162563 main.go:141] libmachine: (pause-720999) DBG | domain pause-720999 has defined IP address 192.168.72.236 and MAC address 52:54:00:0a:6e:d1 in network mk-pause-720999
	I1004 01:39:26.734353  162563 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 01:39:26.739126  162563 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:39:26.739186  162563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:39:26.806665  162563 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:39:26.806692  162563 crio.go:415] Images already preloaded, skipping extraction
	I1004 01:39:26.806752  162563 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:39:26.899060  162563 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:39:26.899105  162563 cache_images.go:84] Images are preloaded, skipping loading
	I1004 01:39:26.899180  162563 ssh_runner.go:195] Run: crio config
	I1004 01:39:27.151607  162563 cni.go:84] Creating CNI manager for ""
	I1004 01:39:27.151636  162563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:39:27.151664  162563 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:39:27.151690  162563 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.236 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-720999 NodeName:pause-720999 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:39:27.151904  162563 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-720999"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:39:27.151996  162563 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-720999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-720999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 01:39:27.152063  162563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:39:27.183458  162563 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:39:27.183567  162563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:39:27.211418  162563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1004 01:39:27.240282  162563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:39:27.265916  162563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1004 01:39:27.291354  162563 ssh_runner.go:195] Run: grep 192.168.72.236	control-plane.minikube.internal$ /etc/hosts
	I1004 01:39:27.299786  162563 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999 for IP: 192.168.72.236
	I1004 01:39:27.299827  162563 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:39:27.300011  162563 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:39:27.300072  162563 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:39:27.300169  162563 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/client.key
	I1004 01:39:27.300251  162563 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/apiserver.key.72479ac6
	I1004 01:39:27.300303  162563 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/proxy-client.key
	I1004 01:39:27.300443  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:39:27.300485  162563 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:39:27.300500  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:39:27.300538  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:39:27.300573  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:39:27.300623  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:39:27.300680  162563 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:39:27.301410  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:39:27.335224  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:39:27.378759  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:39:27.414276  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/pause-720999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:39:27.450335  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:39:27.499420  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:39:27.552375  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:39:27.596647  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:39:27.642256  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:39:27.691244  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:39:27.726198  162563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:39:27.781175  162563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:39:27.832104  162563 ssh_runner.go:195] Run: openssl version
	I1004 01:39:27.849292  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:39:27.869168  162563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:39:27.880358  162563 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:39:27.880438  162563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:39:27.892058  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:39:27.918273  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:39:27.952593  162563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:39:27.960871  162563 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:39:27.960956  162563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:39:27.973183  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:39:27.998911  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:39:28.034562  162563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:39:28.050178  162563 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:39:28.050269  162563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:39:28.072102  162563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
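The certificate install above follows a fixed pattern: each .pem is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked a second time under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0 and b5213941.0 names). A minimal Go sketch of the hash-and-link step, assuming openssl is on PATH and reusing the minikubeCA path from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/etc/ssl/certs/minikubeCA.pem" // path taken from the log above
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash that
		// OpenSSL uses to look certificates up in /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		// The hash becomes the link name, e.g. /etc/ssl/certs/b5213941.0.
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
	}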
	I1004 01:39:28.088125  162563 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:39:28.095841  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:39:28.106361  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:39:28.117885  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:39:28.129511  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:39:28.141693  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:39:28.153181  162563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
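The -checkend 86400 runs above ask openssl whether each certificate will still be valid 86400 seconds (one day) from now; a non-zero exit would mean the cert is about to expire and needs to be regenerated. The equivalent check can be done with Go's standard library; a sketch assuming the certificate path from the log is readable:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Certificate path taken from the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
		fmt.Println("valid in 24h:", time.Now().Add(24*time.Hour).Before(cert.NotAfter))
	}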
	I1004 01:39:28.164250  162563 kubeadm.go:404] StartCluster: {Name:pause-720999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.2 ClusterName:pause-720999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.236 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:39:28.164492  162563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:39:28.164606  162563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:39:28.252126  162563 cri.go:89] found id: "f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b"
	I1004 01:39:28.252216  162563 cri.go:89] found id: "087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a"
	I1004 01:39:28.252248  162563 cri.go:89] found id: "b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc"
	I1004 01:39:28.252264  162563 cri.go:89] found id: "213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a"
	I1004 01:39:28.252278  162563 cri.go:89] found id: "a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a"
	I1004 01:39:28.252292  162563 cri.go:89] found id: "1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f"
	I1004 01:39:28.252338  162563 cri.go:89] found id: "d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1"
	I1004 01:39:28.252354  162563 cri.go:89] found id: ""
	I1004 01:39:28.252439  162563 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
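The stderr capture above ends while minikube is enumerating kube-system containers: the crictl ps -a --quiet call with the io.kubernetes.pod.namespace=kube-system label produces the container IDs logged as "found id:", and sudo runc list -f json is the lower-level follow-up where the capture cuts off. A sketch of the same crictl query, assuming crictl is installed and CRI-O is the configured runtime endpoint:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List every kube-system container (running or exited) by ID, as in the log above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}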
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-720999 -n pause-720999
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-720999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-720999 logs -n 25: (1.668209541s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo find                           | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo crio                           | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-171116                                     | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:37 UTC |
	| start   | -p cert-expiration-528457                            | cert-expiration-528457   | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:39 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-874915                          | force-systemd-env-874915 | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:37 UTC |
	| start   | -p cert-options-703971                               | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:39 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| start   | -p pause-720999                                      | pause-720999             | jenkins | v1.31.2 | 04 Oct 23 01:38 UTC | 04 Oct 23 01:39 UTC |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:38 UTC | 04 Oct 23 01:39 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	| start   | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| ssh     | cert-options-703971 ssh                              | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	|         | openssl x509 -text -noout -in                        |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                          |         |         |                     |                     |
	| ssh     | -p cert-options-703971 -- sudo                       | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                          |         |         |                     |                     |
	| delete  | -p cert-options-703971                               | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	| start   | -p old-k8s-version-107182                            | old-k8s-version-107182   | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC |                     |
	|         | --memory=2200                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |         |         |                     |                     |
	|         | --kvm-network=default                                |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |         |         |                     |                     |
	|         | --keep-context=false                                 |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:39:42
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:39:42.147417  163521 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:39:42.147560  163521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:39:42.147570  163521 out.go:309] Setting ErrFile to fd 2...
	I1004 01:39:42.147575  163521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:39:42.147783  163521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:39:42.148414  163521 out.go:303] Setting JSON to false
	I1004 01:39:42.149413  163521 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8533,"bootTime":1696375049,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:39:42.149479  163521 start.go:138] virtualization: kvm guest
	I1004 01:39:42.151784  163521 out.go:177] * [old-k8s-version-107182] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:39:42.153634  163521 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:39:42.155072  163521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:39:42.153639  163521 notify.go:220] Checking for updates...
	I1004 01:39:42.157807  163521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:39:42.159129  163521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:39:42.160460  163521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:39:42.161768  163521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:39:42.163843  163521 config.go:182] Loaded profile config "NoKubernetes-294276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1004 01:39:42.163999  163521 config.go:182] Loaded profile config "cert-expiration-528457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:39:42.164193  163521 config.go:182] Loaded profile config "pause-720999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:39:42.164305  163521 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:39:42.207380  163521 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:39:42.208772  163521 start.go:298] selected driver: kvm2
	I1004 01:39:42.208794  163521 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:39:42.208810  163521 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:39:42.209795  163521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:39:42.209950  163521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:39:42.227581  163521 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:39:42.227642  163521 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:39:42.227900  163521 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:39:42.227936  163521 cni.go:84] Creating CNI manager for ""
	I1004 01:39:42.227946  163521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:39:42.227953  163521 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 01:39:42.227959  163521 start_flags.go:321] config:
	{Name:old-k8s-version-107182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-107182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:39:42.228084  163521 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:39:42.230094  163521 out.go:177] * Starting control plane node old-k8s-version-107182 in cluster old-k8s-version-107182
	I1004 01:39:41.946678  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:41.947152  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:41.947204  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:41.947138  163224 retry.go:31] will retry after 1.107492832s: waiting for machine to come up
	I1004 01:39:43.056438  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:43.056930  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:43.056953  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:43.056870  163224 retry.go:31] will retry after 1.622614723s: waiting for machine to come up
	I1004 01:39:44.681614  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:44.682126  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:44.682149  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:44.682067  163224 retry.go:31] will retry after 1.425874865s: waiting for machine to come up
	I1004 01:39:41.941926  162563 pod_ready.go:102] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:44.442856  162563 pod_ready.go:102] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:45.942904  162563 pod_ready.go:92] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.942931  162563 pod_ready.go:81] duration metric: took 8.525578319s waiting for pod "etcd-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.942944  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.949165  162563 pod_ready.go:92] pod "kube-apiserver-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.949191  162563 pod_ready.go:81] duration metric: took 6.240405ms waiting for pod "kube-apiserver-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.949200  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.955779  162563 pod_ready.go:92] pod "kube-controller-manager-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.955800  162563 pod_ready.go:81] duration metric: took 6.594168ms waiting for pod "kube-controller-manager-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.955809  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhbwh" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.963120  162563 pod_ready.go:92] pod "kube-proxy-vhbwh" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.963149  162563 pod_ready.go:81] duration metric: took 7.333086ms waiting for pod "kube-proxy-vhbwh" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.963163  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:42.231483  163521 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 01:39:42.231524  163521 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1004 01:39:42.231531  163521 cache.go:57] Caching tarball of preloaded images
	I1004 01:39:42.231603  163521 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:39:42.231615  163521 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1004 01:39:42.231699  163521 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/config.json ...
	I1004 01:39:42.231719  163521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/config.json: {Name:mk9afb4f5618c97beab4c223fd87cbb61bc88a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:39:42.231888  163521 start.go:365] acquiring machines lock for old-k8s-version-107182: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:39:48.346572  162563 pod_ready.go:102] pod "kube-scheduler-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:49.347131  162563 pod_ready.go:92] pod "kube-scheduler-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:49.347162  162563 pod_ready.go:81] duration metric: took 3.38399079s waiting for pod "kube-scheduler-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:49.347171  162563 pod_ready.go:38] duration metric: took 11.95714084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
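Each pod_ready.go wait above polls a pod until its Ready condition reports True (etcd-pause-720999 took about 8.5s here). Outside of minikube, the same readiness poll could be written against client-go; a hedged sketch, with the pod name taken from the log and the default ~/.kube/config assumed:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until the pod reports the Ready condition as True.
		err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-720999", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		fmt.Println("etcd-pause-720999 ready:", err == nil)
	}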
	I1004 01:39:49.347187  162563 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:39:49.347234  162563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:39:49.361245  162563 api_server.go:72] duration metric: took 12.093723408s to wait for apiserver process to appear ...
	I1004 01:39:49.361273  162563 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:39:49.361289  162563 api_server.go:253] Checking apiserver healthz at https://192.168.72.236:8443/healthz ...
	I1004 01:39:49.367103  162563 api_server.go:279] https://192.168.72.236:8443/healthz returned 200:
	ok
	I1004 01:39:49.368414  162563 api_server.go:141] control plane version: v1.28.2
	I1004 01:39:49.368435  162563 api_server.go:131] duration metric: took 7.15519ms to wait for apiserver health ...
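The healthz wait above is a plain HTTPS GET against the apiserver endpoint; a 200 response with body "ok" is what the log records as healthy. A minimal sketch of that probe, with the endpoint taken from the log and TLS verification skipped only because this is a throwaway check against a self-signed test cluster:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The test cluster uses the self-signed minikubeCA, so verification is
			// skipped for this illustrative probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.72.236:8443/healthz") // endpoint from the log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}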
	I1004 01:39:49.368445  162563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:39:49.375189  162563 system_pods.go:59] 7 kube-system pods found
	I1004 01:39:49.375224  162563 system_pods.go:61] "coredns-5dd5756b68-56mr8" [dc92e1e3-f685-4343-b8ed-8c37efb906c6] Running
	I1004 01:39:49.375233  162563 system_pods.go:61] "coredns-5dd5756b68-mn7rg" [d030c38e-8704-480a-96d3-fa78c83de8a7] Running
	I1004 01:39:49.375241  162563 system_pods.go:61] "etcd-pause-720999" [16560d3e-b7a0-4154-9e1d-238546be759d] Running
	I1004 01:39:49.375248  162563 system_pods.go:61] "kube-apiserver-pause-720999" [f606e554-c9dc-4537-809e-e18e7956fbea] Running
	I1004 01:39:49.375260  162563 system_pods.go:61] "kube-controller-manager-pause-720999" [36eb35dc-a377-4637-81f0-4b8f518c94db] Running
	I1004 01:39:49.375272  162563 system_pods.go:61] "kube-proxy-vhbwh" [d1f9d83f-6d5e-40bb-8504-ff7867bea039] Running
	I1004 01:39:49.375280  162563 system_pods.go:61] "kube-scheduler-pause-720999" [39970d28-e025-4d86-bb8e-2dca1fbe6009] Running
	I1004 01:39:49.375293  162563 system_pods.go:74] duration metric: took 6.840624ms to wait for pod list to return data ...
	I1004 01:39:49.375306  162563 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:39:49.378065  162563 default_sa.go:45] found service account: "default"
	I1004 01:39:49.378086  162563 default_sa.go:55] duration metric: took 2.768877ms for default service account to be created ...
	I1004 01:39:49.378093  162563 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:39:49.384461  162563 system_pods.go:86] 7 kube-system pods found
	I1004 01:39:49.384499  162563 system_pods.go:89] "coredns-5dd5756b68-56mr8" [dc92e1e3-f685-4343-b8ed-8c37efb906c6] Running
	I1004 01:39:49.384509  162563 system_pods.go:89] "coredns-5dd5756b68-mn7rg" [d030c38e-8704-480a-96d3-fa78c83de8a7] Running
	I1004 01:39:49.384517  162563 system_pods.go:89] "etcd-pause-720999" [16560d3e-b7a0-4154-9e1d-238546be759d] Running
	I1004 01:39:49.384524  162563 system_pods.go:89] "kube-apiserver-pause-720999" [f606e554-c9dc-4537-809e-e18e7956fbea] Running
	I1004 01:39:49.384533  162563 system_pods.go:89] "kube-controller-manager-pause-720999" [36eb35dc-a377-4637-81f0-4b8f518c94db] Running
	I1004 01:39:49.384544  162563 system_pods.go:89] "kube-proxy-vhbwh" [d1f9d83f-6d5e-40bb-8504-ff7867bea039] Running
	I1004 01:39:49.384565  162563 system_pods.go:89] "kube-scheduler-pause-720999" [39970d28-e025-4d86-bb8e-2dca1fbe6009] Running
	I1004 01:39:49.384577  162563 system_pods.go:126] duration metric: took 6.476221ms to wait for k8s-apps to be running ...
	I1004 01:39:49.384592  162563 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:39:49.384661  162563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:39:49.398510  162563 system_svc.go:56] duration metric: took 13.910652ms WaitForService to wait for kubelet.
	I1004 01:39:49.398532  162563 kubeadm.go:581] duration metric: took 12.131018717s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:39:49.398556  162563 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:39:49.537200  162563 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:39:49.537234  162563 node_conditions.go:123] node cpu capacity is 2
	I1004 01:39:49.537245  162563 node_conditions.go:105] duration metric: took 138.684321ms to run NodePressure ...
	I1004 01:39:49.537257  162563 start.go:228] waiting for startup goroutines ...
	I1004 01:39:49.537262  162563 start.go:233] waiting for cluster config update ...
	I1004 01:39:49.537268  162563 start.go:242] writing updated cluster config ...
	I1004 01:39:49.537607  162563 ssh_runner.go:195] Run: rm -f paused
	I1004 01:39:49.589413  162563 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:39:49.591670  162563 out.go:177] * Done! kubectl is now configured to use "pause-720999" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:37:47 UTC, ends at Wed 2023-10-04 01:39:50 UTC. --
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389374385Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389414918Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389555921Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389610491Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389680267Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389725659Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389772435Z" level=debug msg="parsed reference into \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc\"" file="storage/storage_transport.go:185"
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.389905660Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,RepoTags:[registry.k8s.io/kube-apiserver:v1.28.2],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631 registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c],Size_:127149008,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,RepoTags:[registry.k8s.io/kube-controller-manager:v1.28.2],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4 registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e],Size_:123171638,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:7a5d9d67a13f6ae03198
9bc2969ec55b06437725f397e6eb75b1dccac465a7b8,RepoTags:[registry.k8s.io/kube-scheduler:v1.28.2],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543],Size_:61485878,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,RepoTags:[registry.k8s.io/kube-proxy:v1.28.2],RepoDigests:[registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf],Size_:74687895,Uid:nil,Username:,Spec:nil,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34
c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,},&Image{Id:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,RepoTags:[registry.k8s.io/etcd:3.5.9-0],RepoDigests:[registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15 registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3],Size_:295456551,Uid:&Int64Value{Value:0,},Username:,Spec:nil,},&Image{Id:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,RepoTags:[registry.k8s.io/coredns/coredns:v1.10.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378],Size_:53621675,Uid:nil,Username:,Spec:nil,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.
io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},&Image{Id:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,RepoTags:[docker.io/kindest/kindnetd:v20230809-80a64d96],RepoDigests:[docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052 docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4],Size_:65258016,Uid:nil,Username:,Spec:nil,},},}" file="go-grpc-middleware/chain.go:25" id=a2a6f20a-e7c9-4683-90bc-2cfdc2a6f002 name=/runtime.v1.ImageService/ListImages
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.390180337Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d17f6840-b13b-4509-85aa-ed9f275790cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.390219785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d17f6840-b13b-4509-85aa-ed9f275790cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.390526640Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d17f6840-b13b-4509-85aa-ed9f275790cf name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.450114997Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eecf43aa-a718-4d6f-9a41-0dbd763f2843 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.450205371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eecf43aa-a718-4d6f-9a41-0dbd763f2843 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.452395063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=da6dd15e-cd97-41f8-8cd4-7729746710ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.453068879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383590453047217,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=da6dd15e-cd97-41f8-8cd4-7729746710ef name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.454237042Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef39e3cb-1a6c-4597-bd6a-f8a902024207 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.454378964Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef39e3cb-1a6c-4597-bd6a-f8a902024207 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.454937302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef39e3cb-1a6c-4597-bd6a-f8a902024207 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.501691825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d0256423-45c0-4479-b664-2cb2e514dccb name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.501809444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d0256423-45c0-4479-b664-2cb2e514dccb name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.503444284Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=92f40e48-c7ee-4a42-8657-a9099c2dfaac name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.504639124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383590504617788,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=92f40e48-c7ee-4a42-8657-a9099c2dfaac name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.507784787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=73981056-f9f2-40aa-b654-f5f1352dc262 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.507903825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=73981056-f9f2-40aa-b654-f5f1352dc262 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:50 pause-720999 crio[2335]: time="2023-10-04 01:39:50.508324150Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=73981056-f9f2-40aa-b654-f5f1352dc262 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d3d7cbb80e8c6       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   10 seconds ago       Running             kube-proxy                1                   eda9d3da24a68       kube-proxy-vhbwh
	fb457dc442ba5       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   18 seconds ago       Running             kube-apiserver            1                   e79601488ecc6       kube-apiserver-pause-720999
	7849aa199d9a7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   20 seconds ago       Running             kube-scheduler            1                   2a342270a1d1a       kube-scheduler-pause-720999
	48bd1feb06d79       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago       Running             coredns                   1                   582c5a105962b       coredns-5dd5756b68-56mr8
	598e6482466b3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago       Running             coredns                   1                   1fba14a54cab5       coredns-5dd5756b68-mn7rg
	3451fa7631180       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   22 seconds ago       Running             etcd                      1                   bc8f18e7c407a       etcd-pause-720999
	fd606a2e48532       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   22 seconds ago       Running             kube-controller-manager   1                   e5972b4783d0c       kube-controller-manager-pause-720999
	f4d88e5fa1102       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   About a minute ago   Exited              kube-proxy                0                   89bfb8f323bb5       kube-proxy-vhbwh
	087ed9e1b1481       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   fe3cee9b473db       coredns-5dd5756b68-56mr8
	b036b1ad5d034       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   30e74496263eb       coredns-5dd5756b68-mn7rg
	213bf944bc856       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   acd863299380f       etcd-pause-720999
	a199e1ab47e3d       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   About a minute ago   Exited              kube-scheduler            0                   48b966aa64d18       kube-scheduler-pause-720999
	1aac370f99f26       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   About a minute ago   Exited              kube-apiserver            0                   6e3c198c357c6       kube-apiserver-pause-720999
	d21ddb214823f       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   About a minute ago   Exited              kube-controller-manager   0                   f20a28829c957       kube-controller-manager-pause-720999
	
	* 
	* ==> coredns [087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53302 - 31300 "HINFO IN 6050563274851998706.4123623980529176257. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058679919s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57867 - 37362 "HINFO IN 1265801200153013347.8888941506576543796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024790318s
	
	* 
	* ==> coredns [598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43372 - 33168 "HINFO IN 5110613864747501214.5442164686957156669. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02058181s
	
	* 
	* ==> coredns [b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-720999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-720999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=pause-720999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_38_20_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:38:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-720999
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:39:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.236
	  Hostname:    pause-720999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 5022fa1f078044a78d269407b53865a5
	  System UUID:                5022fa1f-0780-44a7-8d26-9407b53865a5
	  Boot ID:                    66d8928c-67ba-4828-ad25-5e765bd11d46
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-56mr8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 coredns-5dd5756b68-mn7rg                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                 etcd-pause-720999                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         90s
	  kube-system                 kube-apiserver-pause-720999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-pause-720999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-vhbwh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-720999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    0 (0%)
	  memory             240Mi (12%)   340Mi (17%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 10s                kube-proxy       
	  Normal   Starting                 99s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node pause-720999 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node pause-720999 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node pause-720999 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  90s                kubelet          Node pause-720999 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s                kubelet          Node pause-720999 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     90s                kubelet          Node pause-720999 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                90s                kubelet          Node pause-720999 status is now: NodeReady
	  Normal   Starting                 90s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           77s                node-controller  Node pause-720999 event: Registered Node pause-720999 in Controller
	  Warning  ContainerGCFailed        30s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           3s                 node-controller  Node pause-720999 event: Registered Node pause-720999 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074775] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.574961] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.388278] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141929] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.110435] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.334693] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.127393] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.153129] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[Oct 4 01:38] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.226334] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.708475] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +9.282396] systemd-fstab-generator[1255]: Ignoring "noauto" for root device
	[Oct 4 01:39] systemd-fstab-generator[2178]: Ignoring "noauto" for root device
	[  +0.157925] systemd-fstab-generator[2189]: Ignoring "noauto" for root device
	[  +0.103659] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.184632] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.130318] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +0.335374] systemd-fstab-generator[2287]: Ignoring "noauto" for root device
	[ +15.981829] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a] <==
	* {"level":"warn","ts":"2023-10-04T01:38:34.142975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.316702Z","time spent":"826.267363ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142223Z","caller":"traceutil/trace.go:171","msg":"trace[817942963] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:325; }","duration":"833.467031ms","start":"2023-10-04T01:38:33.308752Z","end":"2023-10-04T01:38:34.142219Z","steps":["trace[817942963] 'agreement among raft nodes before linearized reading'  (duration: 830.760391ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.308741Z","time spent":"834.385121ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142244Z","caller":"traceutil/trace.go:171","msg":"trace[852715553] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:325; }","duration":"842.519209ms","start":"2023-10-04T01:38:33.299711Z","end":"2023-10-04T01:38:34.142231Z","steps":["trace[852715553] 'agreement among raft nodes before linearized reading'  (duration: 839.827397ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.299699Z","time spent":"843.575867ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":221,"request content":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142309Z","caller":"traceutil/trace.go:171","msg":"trace[125518939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:325; }","duration":"853.579321ms","start":"2023-10-04T01:38:33.288726Z","end":"2023-10-04T01:38:34.142305Z","steps":["trace[125518939] 'agreement among raft nodes before linearized reading'  (duration: 849.682034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143421Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.288711Z","time spent":"854.703273ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":242,"request content":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.141866Z","caller":"traceutil/trace.go:171","msg":"trace[1563608939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:325; }","duration":"800.76143ms","start":"2023-10-04T01:38:33.341099Z","end":"2023-10-04T01:38:34.14186Z","steps":["trace[1563608939] 'agreement among raft nodes before linearized reading'  (duration: 798.273287ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143651Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.341092Z","time spent":"802.551376ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":258,"request content":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:35.621101Z","caller":"traceutil/trace.go:171","msg":"trace[1772072781] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"168.762795ms","start":"2023-10-04T01:38:35.452325Z","end":"2023-10-04T01:38:35.621088Z","steps":["trace[1772072781] 'process raft request'  (duration: 168.625071ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:38:35.676193Z","caller":"traceutil/trace.go:171","msg":"trace[1078971930] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"165.379647ms","start":"2023-10-04T01:38:35.510794Z","end":"2023-10-04T01:38:35.676174Z","steps":["trace[1078971930] 'process raft request'  (duration: 165.182894ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:38:35.863659Z","caller":"traceutil/trace.go:171","msg":"trace[989546584] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"182.40217ms","start":"2023-10-04T01:38:35.681235Z","end":"2023-10-04T01:38:35.863638Z","steps":["trace[989546584] 'process raft request'  (duration: 120.106293ms)","trace[989546584] 'compare'  (duration: 61.990615ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T01:38:58.401095Z","caller":"traceutil/trace.go:171","msg":"trace[782125229] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"362.953778ms","start":"2023-10-04T01:38:58.03808Z","end":"2023-10-04T01:38:58.401033Z","steps":["trace[782125229] 'process raft request'  (duration: 362.573918ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:58.402163Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:58.038058Z","time spent":"364.007066ms","remote":"127.0.0.1:40256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" mod_revision:407 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" > >"}
	{"level":"warn","ts":"2023-10-04T01:38:58.567684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.757523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2166384219963067625 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1e108af855e90ce8>","response":"size:41"}
	{"level":"info","ts":"2023-10-04T01:39:17.112668Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-04T01:39:17.112798Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-720999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"]}
	{"level":"warn","ts":"2023-10-04T01:39:17.112909Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.113023Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.27118Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.236:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.271296Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.236:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-04T01:39:17.271393Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8ec6fb919e971e10","current-leader-member-id":"8ec6fb919e971e10"}
	{"level":"info","ts":"2023-10-04T01:39:17.275228Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:17.275377Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:17.27541Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-720999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"]}
	
	* 
	* ==> etcd [3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99] <==
	* {"level":"info","ts":"2023-10-04T01:39:30.212961Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T01:39:30.213003Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T01:39:30.213313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 switched to configuration voters=(10288187001624010256)"}
	{"level":"info","ts":"2023-10-04T01:39:30.213429Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b1c7f4697fbaf59f","local-member-id":"8ec6fb919e971e10","added-peer-id":"8ec6fb919e971e10","added-peer-peer-urls":["https://192.168.72.236:2380"]}
	{"level":"info","ts":"2023-10-04T01:39:30.21376Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b1c7f4697fbaf59f","local-member-id":"8ec6fb919e971e10","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:39:30.213832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:39:30.256761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-04T01:39:30.257325Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:30.257647Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:30.26093Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8ec6fb919e971e10","initial-advertise-peer-urls":["https://192.168.72.236:2380"],"listen-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:39:30.261089Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:39:31.118081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 received MsgPreVoteResp from 8ec6fb919e971e10 at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.118488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 received MsgVoteResp from 8ec6fb919e971e10 at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.121654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.121722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8ec6fb919e971e10 elected leader 8ec6fb919e971e10 at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.124921Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8ec6fb919e971e10","local-member-attributes":"{Name:pause-720999 ClientURLs:[https://192.168.72.236:2379]}","request-path":"/0/members/8ec6fb919e971e10/attributes","cluster-id":"b1c7f4697fbaf59f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:39:31.125016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:39:31.127328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:39:31.129022Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:39:31.130718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.236:2379"}
	{"level":"info","ts":"2023-10-04T01:39:31.145598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:39:31.145708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  01:39:51 up 2 min,  0 users,  load average: 1.65, 0.61, 0.22
	Linux pause-720999 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f] <==
	* I1004 01:38:34.235164       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 01:39:17.112789       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E1004 01:39:17.136744       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E1004 01:39:17.136912       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I1004 01:39:17.137095       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 01:39:17.137299       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I1004 01:39:17.138011       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I1004 01:39:17.139778       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1004 01:39:17.139846       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I1004 01:39:17.142141       1 controller.go:129] Ending legacy_token_tracking_controller
	I1004 01:39:17.142191       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I1004 01:39:17.142250       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I1004 01:39:17.142294       1 available_controller.go:439] Shutting down AvailableConditionController
	I1004 01:39:17.142326       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I1004 01:39:17.142364       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I1004 01:39:17.142409       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1004 01:39:17.142447       1 autoregister_controller.go:165] Shutting down autoregister controller
	I1004 01:39:17.142982       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I1004 01:39:17.143034       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1004 01:39:17.149685       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I1004 01:39:17.149762       1 establishing_controller.go:87] Shutting down EstablishingController
	I1004 01:39:17.149808       1 naming_controller.go:302] Shutting down NamingConditionController
	I1004 01:39:17.149855       1 controller.go:115] Shutting down OpenAPI V3 controller
	I1004 01:39:17.149906       1 controller.go:162] Shutting down OpenAPI controller
	I1004 01:39:17.149956       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	
	* 
	* ==> kube-apiserver [fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f] <==
	* I1004 01:39:35.132574       1 controller.go:78] Starting OpenAPI AggregationController
	I1004 01:39:35.135382       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1004 01:39:35.135395       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1004 01:39:35.206572       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 01:39:35.207603       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 01:39:35.324053       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1004 01:39:35.324136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1004 01:39:35.324620       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 01:39:35.329857       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 01:39:35.331580       1 shared_informer.go:318] Caches are synced for configmaps
	I1004 01:39:35.331656       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1004 01:39:35.333157       1 aggregator.go:166] initial CRD sync complete...
	I1004 01:39:35.333233       1 autoregister_controller.go:141] Starting autoregister controller
	I1004 01:39:35.333264       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 01:39:35.333296       1 cache.go:39] Caches are synced for autoregister controller
	E1004 01:39:35.334322       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1004 01:39:35.335875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 01:39:35.350548       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 01:39:35.399832       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1004 01:39:35.424035       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 01:39:36.140045       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1004 01:39:45.325000       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1004 01:39:47.203732       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 01:39:47.204086       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 01:39:47.243975       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1] <==
	* I1004 01:38:33.382695       1 shared_informer.go:318] Caches are synced for HPA
	I1004 01:38:33.404227       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1004 01:38:33.440051       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:38:33.448242       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:38:33.483553       1 shared_informer.go:318] Caches are synced for endpoint
	I1004 01:38:33.483603       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1004 01:38:33.535070       1 shared_informer.go:318] Caches are synced for attach detach
	I1004 01:38:33.882786       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:38:33.882818       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1004 01:38:33.911659       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:38:34.178955       1 range_allocator.go:380] "Set node PodCIDR" node="pause-720999" podCIDRs=["10.244.0.0/24"]
	I1004 01:38:34.258861       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1004 01:38:34.298559       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vhbwh"
	I1004 01:38:34.346778       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mn7rg"
	I1004 01:38:34.414855       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-56mr8"
	I1004 01:38:34.452171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.162472ms"
	I1004 01:38:34.522672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.337218ms"
	I1004 01:38:34.522977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.218µs"
	I1004 01:38:34.540808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.214µs"
	I1004 01:38:37.573896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="368.029µs"
	I1004 01:38:38.580377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.943µs"
	I1004 01:38:38.650258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.123293ms"
	I1004 01:38:38.653724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.747µs"
	I1004 01:38:38.721822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.01581ms"
	I1004 01:38:38.722015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.764µs"
	
	* 
	* ==> kube-controller-manager [fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e] <==
	* I1004 01:39:47.218547       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:39:47.220590       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:39:47.221109       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1004 01:39:47.226137       1 shared_informer.go:318] Caches are synced for job
	I1004 01:39:47.229148       1 shared_informer.go:318] Caches are synced for HPA
	I1004 01:39:47.231711       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1004 01:39:47.232063       1 shared_informer.go:318] Caches are synced for stateful set
	I1004 01:39:47.235732       1 shared_informer.go:318] Caches are synced for cronjob
	I1004 01:39:47.238542       1 shared_informer.go:318] Caches are synced for PVC protection
	I1004 01:39:47.238840       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1004 01:39:47.238707       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1004 01:39:47.242779       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1004 01:39:47.242939       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1004 01:39:47.249974       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1004 01:39:47.254400       1 shared_informer.go:318] Caches are synced for ephemeral
	I1004 01:39:47.257922       1 shared_informer.go:318] Caches are synced for daemon sets
	I1004 01:39:47.261927       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mn7rg"
	I1004 01:39:47.270074       1 shared_informer.go:318] Caches are synced for persistent volume
	I1004 01:39:47.296516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.438454ms"
	I1004 01:39:47.308758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.460094ms"
	I1004 01:39:47.310044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.29µs"
	I1004 01:39:47.382324       1 shared_informer.go:318] Caches are synced for attach detach
	I1004 01:39:47.776214       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:39:47.777549       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:39:47.777619       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429] <==
	* I1004 01:39:40.147526       1 server_others.go:69] "Using iptables proxy"
	I1004 01:39:40.163235       1 node.go:141] Successfully retrieved node IP: 192.168.72.236
	I1004 01:39:40.222721       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:39:40.222787       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:39:40.226820       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:39:40.227081       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:39:40.227678       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:39:40.228037       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:39:40.230081       1 config.go:188] "Starting service config controller"
	I1004 01:39:40.230244       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:39:40.230611       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:39:40.230809       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:39:40.233857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:39:40.230965       1 config.go:315] "Starting node config controller"
	I1004 01:39:40.234209       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:39:40.330867       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:39:40.335064       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b] <==
	* I1004 01:38:37.819007       1 server_others.go:69] "Using iptables proxy"
	I1004 01:38:37.844110       1 node.go:141] Successfully retrieved node IP: 192.168.72.236
	I1004 01:38:37.906672       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:38:37.906747       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:38:37.911104       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:38:37.911276       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:38:37.912024       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:38:37.912080       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:38:37.914093       1 config.go:188] "Starting service config controller"
	I1004 01:38:37.914606       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:38:37.914695       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:38:37.914723       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:38:37.916293       1 config.go:315] "Starting node config controller"
	I1004 01:38:37.916342       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:38:38.014980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:38:38.015201       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:38:38.017584       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7] <==
	* I1004 01:39:32.271537       1 serving.go:348] Generated self-signed cert in-memory
	W1004 01:39:35.256758       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 01:39:35.256865       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:39:35.256897       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 01:39:35.256925       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 01:39:35.346979       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 01:39:35.347132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:39:35.366260       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:39:35.366363       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:39:35.370796       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:39:35.370897       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:39:35.466695       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a] <==
	* W1004 01:38:16.713858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:38:16.713867       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 01:38:16.713920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:38:16.713930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 01:38:17.540834       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 01:38:17.540961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 01:38:17.551088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:38:17.551161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 01:38:17.559596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:38:17.559689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 01:38:17.658790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:38:17.658887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 01:38:17.700428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 01:38:17.700507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 01:38:17.710800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:38:17.710884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 01:38:17.956979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:38:17.957039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 01:38:18.174552       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:38:18.174716       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 01:38:20.100796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:39:17.129097       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1004 01:39:17.129394       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1004 01:39:17.130288       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 01:39:17.132222       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:37:47 UTC, ends at Wed 2023-10-04 01:39:51 UTC. --
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.386173    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.387155    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.387823    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.388153    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.388415    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.887260    1262 status_manager.go:853] "Failed to get status for pod" podUID="410133f71508ed4d79e0c35165939440" pod="kube-system/etcd-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.887838    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888092    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888392    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888741    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.889006    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.889283    1262 status_manager.go:853] "Failed to get status for pod" podUID="f26e4ccbbd5847f682171b15b5eb9f92" pod="kube-system/kube-scheduler-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.893806    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894271    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894639    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894950    1262 status_manager.go:853] "Failed to get status for pod" podUID="f26e4ccbbd5847f682171b15b5eb9f92" pod="kube-system/kube-scheduler-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895220    1262 status_manager.go:853] "Failed to get status for pod" podUID="410133f71508ed4d79e0c35165939440" pod="kube-system/etcd-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895644    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895943    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.202516    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?resourceVersion=0&timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.202829    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203111    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203413    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203713    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203728    1262 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720999 -n pause-720999
helpers_test.go:261: (dbg) Run:  kubectl --context pause-720999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-720999 -n pause-720999
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-720999 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-720999 logs -n 25: (1.536247176s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat cri-docker                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | cri-dockerd --version                                |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl status containerd                          |                          |         |         |                     |                     |
	|         | --all --full --no-pager                              |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat containerd                             |                          |         |         |                     |                     |
	|         | --no-pager                                           |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo cat                            | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/containerd/config.toml                          |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | containerd config dump                               |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl status crio --all                          |                          |         |         |                     |                     |
	|         | --full --no-pager                                    |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo                                | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo find                           | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                          |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                          |         |         |                     |                     |
	| ssh     | -p cilium-171116 sudo crio                           | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC |                     |
	|         | config                                               |                          |         |         |                     |                     |
	| delete  | -p cilium-171116                                     | cilium-171116            | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:37 UTC |
	| start   | -p cert-expiration-528457                            | cert-expiration-528457   | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:39 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-874915                          | force-systemd-env-874915 | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:37 UTC |
	| start   | -p cert-options-703971                               | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:37 UTC | 04 Oct 23 01:39 UTC |
	|         | --memory=2048                                        |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                            |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                        |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                          |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                     |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| start   | -p pause-720999                                      | pause-720999             | jenkins | v1.31.2 | 04 Oct 23 01:38 UTC | 04 Oct 23 01:39 UTC |
	|         | --alsologtostderr                                    |                          |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| start   | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:38 UTC | 04 Oct 23 01:39 UTC |
	|         | --no-kubernetes --driver=kvm2                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| delete  | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	| start   | -p NoKubernetes-294276                               | NoKubernetes-294276      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC |                     |
	|         | --no-kubernetes --driver=kvm2                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	| ssh     | cert-options-703971 ssh                              | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	|         | openssl x509 -text -noout -in                        |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                          |         |         |                     |                     |
	| ssh     | -p cert-options-703971 -- sudo                       | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                          |         |         |                     |                     |
	| delete  | -p cert-options-703971                               | cert-options-703971      | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC | 04 Oct 23 01:39 UTC |
	| start   | -p old-k8s-version-107182                            | old-k8s-version-107182   | jenkins | v1.31.2 | 04 Oct 23 01:39 UTC |                     |
	|         | --memory=2200                                        |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                          |         |         |                     |                     |
	|         | --kvm-network=default                                |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                          |         |         |                     |                     |
	|         | --keep-context=false                                 |                          |         |         |                     |                     |
	|         | --driver=kvm2                                        |                          |         |         |                     |                     |
	|         | --container-runtime=crio                             |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                          |         |         |                     |                     |
	|---------|------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
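For reference, the last entry in the command table above corresponds to a single CLI invocation of the minikube binary under test. A rough reconstruction, assembled only from the flags listed in the table (not copied from the harness source), is:

    minikube start -p old-k8s-version-107182 --memory=2200 --alsologtostderr --wait=true \
        --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
        --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.16.0

The "Last Start" log that follows is the output captured from this invocation.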
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:39:42
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
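Decoding that prefix: [IWEF] is the severity letter (I=info, W=warning, E=error, F=fatal), mmdd is the month and day, followed by the timestamp, the thread id, and the source file and line. For example, the first entry below, I1004 01:39:42.147417  163521 out.go:296] ..., is an info-level message emitted on October 4 at 01:39:42.147417 by thread 163521 from out.go line 296.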
	I1004 01:39:42.147417  163521 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:39:42.147560  163521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:39:42.147570  163521 out.go:309] Setting ErrFile to fd 2...
	I1004 01:39:42.147575  163521 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:39:42.147783  163521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:39:42.148414  163521 out.go:303] Setting JSON to false
	I1004 01:39:42.149413  163521 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8533,"bootTime":1696375049,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:39:42.149479  163521 start.go:138] virtualization: kvm guest
	I1004 01:39:42.151784  163521 out.go:177] * [old-k8s-version-107182] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:39:42.153634  163521 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:39:42.155072  163521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:39:42.153639  163521 notify.go:220] Checking for updates...
	I1004 01:39:42.157807  163521 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:39:42.159129  163521 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:39:42.160460  163521 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:39:42.161768  163521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:39:42.163843  163521 config.go:182] Loaded profile config "NoKubernetes-294276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1004 01:39:42.163999  163521 config.go:182] Loaded profile config "cert-expiration-528457": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:39:42.164193  163521 config.go:182] Loaded profile config "pause-720999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:39:42.164305  163521 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:39:42.207380  163521 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:39:42.208772  163521 start.go:298] selected driver: kvm2
	I1004 01:39:42.208794  163521 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:39:42.208810  163521 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:39:42.209795  163521 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:39:42.209950  163521 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:39:42.227581  163521 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:39:42.227642  163521 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 01:39:42.227900  163521 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:39:42.227936  163521 cni.go:84] Creating CNI manager for ""
	I1004 01:39:42.227946  163521 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:39:42.227953  163521 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 01:39:42.227959  163521 start_flags.go:321] config:
	{Name:old-k8s-version-107182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-107182 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:39:42.228084  163521 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:39:42.230094  163521 out.go:177] * Starting control plane node old-k8s-version-107182 in cluster old-k8s-version-107182
	I1004 01:39:41.946678  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:41.947152  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:41.947204  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:41.947138  163224 retry.go:31] will retry after 1.107492832s: waiting for machine to come up
	I1004 01:39:43.056438  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:43.056930  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:43.056953  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:43.056870  163224 retry.go:31] will retry after 1.622614723s: waiting for machine to come up
	I1004 01:39:44.681614  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:44.682126  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:44.682149  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:44.682067  163224 retry.go:31] will retry after 1.425874865s: waiting for machine to come up
	I1004 01:39:41.941926  162563 pod_ready.go:102] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:44.442856  162563 pod_ready.go:102] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:45.942904  162563 pod_ready.go:92] pod "etcd-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.942931  162563 pod_ready.go:81] duration metric: took 8.525578319s waiting for pod "etcd-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.942944  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.949165  162563 pod_ready.go:92] pod "kube-apiserver-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.949191  162563 pod_ready.go:81] duration metric: took 6.240405ms waiting for pod "kube-apiserver-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.949200  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.955779  162563 pod_ready.go:92] pod "kube-controller-manager-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.955800  162563 pod_ready.go:81] duration metric: took 6.594168ms waiting for pod "kube-controller-manager-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.955809  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vhbwh" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.963120  162563 pod_ready.go:92] pod "kube-proxy-vhbwh" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:45.963149  162563 pod_ready.go:81] duration metric: took 7.333086ms waiting for pod "kube-proxy-vhbwh" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:45.963163  162563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:42.231483  163521 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 01:39:42.231524  163521 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1004 01:39:42.231531  163521 cache.go:57] Caching tarball of preloaded images
	I1004 01:39:42.231603  163521 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:39:42.231615  163521 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I1004 01:39:42.231699  163521 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/config.json ...
	I1004 01:39:42.231719  163521 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/config.json: {Name:mk9afb4f5618c97beab4c223fd87cbb61bc88a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:39:42.231888  163521 start.go:365] acquiring machines lock for old-k8s-version-107182: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:39:48.346572  162563 pod_ready.go:102] pod "kube-scheduler-pause-720999" in "kube-system" namespace has status "Ready":"False"
	I1004 01:39:49.347131  162563 pod_ready.go:92] pod "kube-scheduler-pause-720999" in "kube-system" namespace has status "Ready":"True"
	I1004 01:39:49.347162  162563 pod_ready.go:81] duration metric: took 3.38399079s waiting for pod "kube-scheduler-pause-720999" in "kube-system" namespace to be "Ready" ...
	I1004 01:39:49.347171  162563 pod_ready.go:38] duration metric: took 11.95714084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:39:49.347187  162563 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:39:49.347234  162563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:39:49.361245  162563 api_server.go:72] duration metric: took 12.093723408s to wait for apiserver process to appear ...
	I1004 01:39:49.361273  162563 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:39:49.361289  162563 api_server.go:253] Checking apiserver healthz at https://192.168.72.236:8443/healthz ...
	I1004 01:39:49.367103  162563 api_server.go:279] https://192.168.72.236:8443/healthz returned 200:
	ok
	I1004 01:39:49.368414  162563 api_server.go:141] control plane version: v1.28.2
	I1004 01:39:49.368435  162563 api_server.go:131] duration metric: took 7.15519ms to wait for apiserver health ...
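The healthz probe recorded just above can be repeated by hand against the same endpoint. A minimal sketch, assuming the pause-720999 kubeconfig context from this run is still available (the address and port are taken from the log):

    kubectl --context pause-720999 get --raw /healthz
    # or directly against the endpoint, skipping certificate verification:
    curl -k https://192.168.72.236:8443/healthz

Both should print "ok" when the apiserver is healthy, matching the 200 response logged here; the direct curl relies on the default anonymous access to /healthz and may be rejected on hardened clusters.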
	I1004 01:39:49.368445  162563 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:39:49.375189  162563 system_pods.go:59] 7 kube-system pods found
	I1004 01:39:49.375224  162563 system_pods.go:61] "coredns-5dd5756b68-56mr8" [dc92e1e3-f685-4343-b8ed-8c37efb906c6] Running
	I1004 01:39:49.375233  162563 system_pods.go:61] "coredns-5dd5756b68-mn7rg" [d030c38e-8704-480a-96d3-fa78c83de8a7] Running
	I1004 01:39:49.375241  162563 system_pods.go:61] "etcd-pause-720999" [16560d3e-b7a0-4154-9e1d-238546be759d] Running
	I1004 01:39:49.375248  162563 system_pods.go:61] "kube-apiserver-pause-720999" [f606e554-c9dc-4537-809e-e18e7956fbea] Running
	I1004 01:39:49.375260  162563 system_pods.go:61] "kube-controller-manager-pause-720999" [36eb35dc-a377-4637-81f0-4b8f518c94db] Running
	I1004 01:39:49.375272  162563 system_pods.go:61] "kube-proxy-vhbwh" [d1f9d83f-6d5e-40bb-8504-ff7867bea039] Running
	I1004 01:39:49.375280  162563 system_pods.go:61] "kube-scheduler-pause-720999" [39970d28-e025-4d86-bb8e-2dca1fbe6009] Running
	I1004 01:39:49.375293  162563 system_pods.go:74] duration metric: took 6.840624ms to wait for pod list to return data ...
	I1004 01:39:49.375306  162563 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:39:49.378065  162563 default_sa.go:45] found service account: "default"
	I1004 01:39:49.378086  162563 default_sa.go:55] duration metric: took 2.768877ms for default service account to be created ...
	I1004 01:39:49.378093  162563 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:39:49.384461  162563 system_pods.go:86] 7 kube-system pods found
	I1004 01:39:49.384499  162563 system_pods.go:89] "coredns-5dd5756b68-56mr8" [dc92e1e3-f685-4343-b8ed-8c37efb906c6] Running
	I1004 01:39:49.384509  162563 system_pods.go:89] "coredns-5dd5756b68-mn7rg" [d030c38e-8704-480a-96d3-fa78c83de8a7] Running
	I1004 01:39:49.384517  162563 system_pods.go:89] "etcd-pause-720999" [16560d3e-b7a0-4154-9e1d-238546be759d] Running
	I1004 01:39:49.384524  162563 system_pods.go:89] "kube-apiserver-pause-720999" [f606e554-c9dc-4537-809e-e18e7956fbea] Running
	I1004 01:39:49.384533  162563 system_pods.go:89] "kube-controller-manager-pause-720999" [36eb35dc-a377-4637-81f0-4b8f518c94db] Running
	I1004 01:39:49.384544  162563 system_pods.go:89] "kube-proxy-vhbwh" [d1f9d83f-6d5e-40bb-8504-ff7867bea039] Running
	I1004 01:39:49.384565  162563 system_pods.go:89] "kube-scheduler-pause-720999" [39970d28-e025-4d86-bb8e-2dca1fbe6009] Running
	I1004 01:39:49.384577  162563 system_pods.go:126] duration metric: took 6.476221ms to wait for k8s-apps to be running ...
	I1004 01:39:49.384592  162563 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:39:49.384661  162563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:39:49.398510  162563 system_svc.go:56] duration metric: took 13.910652ms WaitForService to wait for kubelet.
	I1004 01:39:49.398532  162563 kubeadm.go:581] duration metric: took 12.131018717s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:39:49.398556  162563 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:39:49.537200  162563 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:39:49.537234  162563 node_conditions.go:123] node cpu capacity is 2
	I1004 01:39:49.537245  162563 node_conditions.go:105] duration metric: took 138.684321ms to run NodePressure ...
	I1004 01:39:49.537257  162563 start.go:228] waiting for startup goroutines ...
	I1004 01:39:49.537262  162563 start.go:233] waiting for cluster config update ...
	I1004 01:39:49.537268  162563 start.go:242] writing updated cluster config ...
	I1004 01:39:49.537607  162563 ssh_runner.go:195] Run: rm -f paused
	I1004 01:39:49.589413  162563 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:39:49.591670  162563 out.go:177] * Done! kubectl is now configured to use "pause-720999" cluster and "default" namespace by default
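The pod_ready waits earlier in this run can be reproduced manually now that the profile is reported as ready. A minimal sketch, assuming kubectl is still pointed at the pause-720999 context (pod names are taken from the log above):

    kubectl --context pause-720999 -n kube-system get pods
    kubectl --context pause-720999 -n kube-system wait --for=condition=Ready pod/etcd-pause-720999 --timeout=6m

The 6m timeout mirrors the 6m0s wait budget shown in the log for each system pod.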
	I1004 01:39:46.109794  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:46.110329  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:46.110356  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:46.110279  163224 retry.go:31] will retry after 1.832698872s: waiting for machine to come up
	I1004 01:39:47.944447  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | domain NoKubernetes-294276 has defined MAC address 52:54:00:b3:ca:87 in network mk-NoKubernetes-294276
	I1004 01:39:47.944863  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | unable to find current IP address of domain NoKubernetes-294276 in network mk-NoKubernetes-294276
	I1004 01:39:47.944888  163202 main.go:141] libmachine: (NoKubernetes-294276) DBG | I1004 01:39:47.944796  163224 retry.go:31] will retry after 3.099894823s: waiting for machine to come up
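The retry loop above is polling libvirt for the new guest's IP address. A minimal sketch of inspecting the same state by hand on the host, assuming the standard libvirt client tools are present; the domain and network names are taken from the log:

    virsh -c qemu:///system domifaddr NoKubernetes-294276
    virsh -c qemu:///system net-dhcp-leases mk-NoKubernetes-294276

Until the guest has obtained a DHCP lease, both commands return an empty table, which is consistent with the "unable to find current IP address" messages here.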
	
	* 
	* ==> CRI-O <==
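The journal excerpt below shows CRI-O answering RuntimeService/ImageService gRPC calls (Version, ImageFsInfo, ListContainers). The same queries can be issued from inside the node with crictl; a minimal sketch, assuming crictl is installed on the guest and CRI-O is listening on its default socket path:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a

The ps -a output corresponds to the ListContainers responses logged below, covering both the running (Attempt:1) and exited (Attempt:0) control-plane containers.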
	* -- Journal begins at Wed 2023-10-04 01:37:47 UTC, ends at Wed 2023-10-04 01:39:52 UTC. --
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.579299898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383592579281515,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=f6796c2c-3b9e-4670-b80f-b7ed3e016de9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.579999335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9912f00e-0d19-4919-868b-985ea9e2b18a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.580070935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9912f00e-0d19-4919-868b-985ea9e2b18a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.580909177Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9912f00e-0d19-4919-868b-985ea9e2b18a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.630896955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ad4133fa-48c2-418a-afe5-30ec02ec9fdc name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.630955982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ad4133fa-48c2-418a-afe5-30ec02ec9fdc name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.632190674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=166b0493-b646-4cf6-8375-223c0dd2e050 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.632620447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383592632604286,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=166b0493-b646-4cf6-8375-223c0dd2e050 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.633315777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=579bc2aa-d2e2-4611-b60c-999a756d6e53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.633367701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=579bc2aa-d2e2-4611-b60c-999a756d6e53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.633732825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=579bc2aa-d2e2-4611-b60c-999a756d6e53 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.676864537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=34b2c10b-53fb-4b72-9a0a-364349736615 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.676957236Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=34b2c10b-53fb-4b72-9a0a-364349736615 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.681260095Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=545dfe8a-c477-4350-92e6-ff5e24f6a454 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.681717935Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383592681703942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=545dfe8a-c477-4350-92e6-ff5e24f6a454 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.682639880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4ae53c9c-2c28-45de-b816-7caa3d107787 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.682714768Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4ae53c9c-2c28-45de-b816-7caa3d107787 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.683104114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4ae53c9c-2c28-45de-b816-7caa3d107787 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.729345103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6afe431a-dfbb-4a99-a42e-1918fe1c4da5 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.729536354Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6afe431a-dfbb-4a99-a42e-1918fe1c4da5 name=/runtime.v1.RuntimeService/Version
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.730976874Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=56141413-4cb5-4c9e-9736-99bcbdf63254 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.731365366Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696383592731352048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116239,},InodesUsed:&UInt64Value{Value:57,},},},}" file="go-grpc-middleware/chain.go:25" id=56141413-4cb5-4c9e-9736-99bcbdf63254 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.732108638Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b4bc06a7-d3ba-4950-b606-98115f7b29c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.732190765Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b4bc06a7-d3ba-4950-b606-98115f7b29c1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 01:39:52 pause-720999 crio[2335]: time="2023-10-04 01:39:52.732528445Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429,PodSandboxId:eda9d3da24a68e2c2a23603f736967ed1af2e08444b7b9d18618469b80f6c442,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696383579905849455,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePat
h: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f,PodSandboxId:e79601488ecc611cfc48bd5cca4073250e7d7b1c28f14417108a2bd2085d40fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696383571957374552,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7,PodSandboxId:2a342270a1d1a5688b9a30d8e4947ad97e48da7fbb9135a775b83a36c1691b40,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696383570390571603,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027,PodSandboxId:582c5a105962b5121b28b9ce8becd86a381ac78a1d756c7a69f36a545e695e7c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569445325409,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc92e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protoco
l\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81,PodSandboxId:1fba14a54cab5a99acad1e600c5a4499b9db5f2381b775722f1f7ecd63777653,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696383569099360968,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[
string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99,PodSandboxId:bc8f18e7c407afd8a5928e01ee225cd811a1dd40bea131828b26d9378a0a6d93,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696383568056749356,Labels:map[string]string{io.kubernetes.containe
r.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e,PodSandboxId:e5972b4783d0c1d8d2681ba352471a0d7462bf43111737dc1b14962e24fa167f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696383567808933912,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b,PodSandboxId:89bfb8f323bb5713326c13b247329b09c0772bc24b01f1093d08b444f3c9a3d6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,State:CONTAINER_EXITED,CreatedAt:1696383517500723446,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vhbwh
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1f9d83f-6d5e-40bb-8504-ff7867bea039,},Annotations:map[string]string{io.kubernetes.container.hash: af70fa5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a,PodSandboxId:fe3cee9b473db0e97ee11ff90798124a3030b51f6aaaf45dfe85572feee1ed8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383517336163550,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-56mr8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc9
2e1e3-f685-4343-b8ed-8c37efb906c6,},Annotations:map[string]string{io.kubernetes.container.hash: ddd07930,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc,PodSandboxId:30e74496263eb848773490a1e675744d2a4f7d2467dfcb015128b7b8be021b7a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1696383516890895723,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-mn7rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d030c38e-8704-480a-96d3-fa78c83de8a7,},Annotations:map[string]string{io.kubernetes.container.hash: bbb4c434,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a,PodSandboxId:acd863299380feef943e639b74bb0769f4d9d9b00bf1b2ede8827bab3e48ea93,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]stri
ng{},},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1696383492825125487,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 410133f71508ed4d79e0c35165939440,},Annotations:map[string]string{io.kubernetes.container.hash: 92093598,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a,PodSandboxId:48b966aa64d189f1d2aee50e10bd070da3169deeb7e36f5eaa21d532c2c278b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac46
5a7b8,State:CONTAINER_EXITED,CreatedAt:1696383492500765597,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f26e4ccbbd5847f682171b15b5eb9f92,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f,PodSandboxId:6e3c198c357c69b59be922d781589acc19daab442d1a98af149f6ef631c2d4ac,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,State:CONTAINER_EXITED,CreatedAt:1696383492302841
348,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 98d53559b7ed33dbf1c34e63ac43bc9b,},Annotations:map[string]string{io.kubernetes.container.hash: 26bfd4ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1,PodSandboxId:f20a28829c9573d668c6a157ec53281ef75306a54de145b5f388eba1c7cda195,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,State:CONTAINER_EXITED,CreatedAt:1696383492255422245,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-720999,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c3b1831477b47ac77b4cd29bf5cc7f1,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b4bc06a7-d3ba-4950-b606-98115f7b29c1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d3d7cbb80e8c6       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   12 seconds ago       Running             kube-proxy                1                   eda9d3da24a68       kube-proxy-vhbwh
	fb457dc442ba5       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   20 seconds ago       Running             kube-apiserver            1                   e79601488ecc6       kube-apiserver-pause-720999
	7849aa199d9a7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   22 seconds ago       Running             kube-scheduler            1                   2a342270a1d1a       kube-scheduler-pause-720999
	48bd1feb06d79       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   23 seconds ago       Running             coredns                   1                   582c5a105962b       coredns-5dd5756b68-56mr8
	598e6482466b3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   23 seconds ago       Running             coredns                   1                   1fba14a54cab5       coredns-5dd5756b68-mn7rg
	3451fa7631180       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago       Running             etcd                      1                   bc8f18e7c407a       etcd-pause-720999
	fd606a2e48532       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   24 seconds ago       Running             kube-controller-manager   1                   e5972b4783d0c       kube-controller-manager-pause-720999
	f4d88e5fa1102       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   About a minute ago   Exited              kube-proxy                0                   89bfb8f323bb5       kube-proxy-vhbwh
	087ed9e1b1481       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   fe3cee9b473db       coredns-5dd5756b68-56mr8
	b036b1ad5d034       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   About a minute ago   Exited              coredns                   0                   30e74496263eb       coredns-5dd5756b68-mn7rg
	213bf944bc856       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   About a minute ago   Exited              etcd                      0                   acd863299380f       etcd-pause-720999
	a199e1ab47e3d       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   About a minute ago   Exited              kube-scheduler            0                   48b966aa64d18       kube-scheduler-pause-720999
	1aac370f99f26       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   About a minute ago   Exited              kube-apiserver            0                   6e3c198c357c6       kube-apiserver-pause-720999
	d21ddb214823f       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   About a minute ago   Exited              kube-controller-manager   0                   f20a28829c957       kube-controller-manager-pause-720999
	
	* 
	* ==> coredns [087ed9e1b148173be1f513ba0bdc2171a7524f06d975a0552211c65e3988100a] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53302 - 31300 "HINFO IN 6050563274851998706.4123623980529176257. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.058679919s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [48bd1feb06d79d9f77d83e2dd6c27f45eef32215a835367685246c6f6d4c1027] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57867 - 37362 "HINFO IN 1265801200153013347.8888941506576543796. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024790318s
	
	* 
	* ==> coredns [598e6482466b396e055c11218d32357d284bf908055d91bd67bca8f077e3de81] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43372 - 33168 "HINFO IN 5110613864747501214.5442164686957156669. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02058181s
	
	* 
	* ==> coredns [b036b1ad5d034b17209181bb56bf5b8138c1e313a2ef6d45857c097b59a528dc] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-720999
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-720999
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=pause-720999
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_38_20_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:38:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-720999
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 01:39:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 01:38:41 +0000   Wed, 04 Oct 2023 01:38:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.236
	  Hostname:    pause-720999
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 5022fa1f078044a78d269407b53865a5
	  System UUID:                5022fa1f-0780-44a7-8d26-9407b53865a5
	  Boot ID:                    66d8928c-67ba-4828-ad25-5e765bd11d46
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-56mr8                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 coredns-5dd5756b68-mn7rg                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     79s
	  kube-system                 etcd-pause-720999                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         93s
	  kube-system                 kube-apiserver-pause-720999             250m (12%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-pause-720999    200m (10%)    0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-vhbwh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-720999             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (12%)  340Mi (17%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 75s                  kube-proxy       
	  Normal   Starting                 12s                  kube-proxy       
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node pause-720999 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node pause-720999 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     102s (x7 over 102s)  kubelet          Node pause-720999 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  93s                  kubelet          Node pause-720999 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s                  kubelet          Node pause-720999 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     93s                  kubelet          Node pause-720999 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  93s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                93s                  kubelet          Node pause-720999 status is now: NodeReady
	  Normal   Starting                 93s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           80s                  node-controller  Node pause-720999 event: Registered Node pause-720999 in Controller
	  Warning  ContainerGCFailed        33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           6s                   node-controller  Node pause-720999 event: Registered Node pause-720999 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074775] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.574961] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.388278] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141929] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.110435] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.334693] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.127393] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.153129] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[Oct 4 01:38] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.226334] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +9.708475] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +9.282396] systemd-fstab-generator[1255]: Ignoring "noauto" for root device
	[Oct 4 01:39] systemd-fstab-generator[2178]: Ignoring "noauto" for root device
	[  +0.157925] systemd-fstab-generator[2189]: Ignoring "noauto" for root device
	[  +0.103659] kauditd_printk_skb: 30 callbacks suppressed
	[  +0.184632] systemd-fstab-generator[2253]: Ignoring "noauto" for root device
	[  +0.130318] systemd-fstab-generator[2264]: Ignoring "noauto" for root device
	[  +0.335374] systemd-fstab-generator[2287]: Ignoring "noauto" for root device
	[ +15.981829] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [213bf944bc856de8212febc0357644d5a0331e902119ea6c3a15764e7a59819a] <==
	* {"level":"warn","ts":"2023-10-04T01:38:34.142975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.316702Z","time spent":"826.267363ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":233,"request content":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142223Z","caller":"traceutil/trace.go:171","msg":"trace[817942963] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:325; }","duration":"833.467031ms","start":"2023-10-04T01:38:33.308752Z","end":"2023-10-04T01:38:34.142219Z","steps":["trace[817942963] 'agreement among raft nodes before linearized reading'  (duration: 830.760391ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143132Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.308741Z","time spent":"834.385121ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":1,"response size":231,"request content":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142244Z","caller":"traceutil/trace.go:171","msg":"trace[852715553] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:325; }","duration":"842.519209ms","start":"2023-10-04T01:38:33.299711Z","end":"2023-10-04T01:38:34.142231Z","steps":["trace[852715553] 'agreement among raft nodes before linearized reading'  (duration: 839.827397ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143282Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.299699Z","time spent":"843.575867ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":1,"response size":221,"request content":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.142309Z","caller":"traceutil/trace.go:171","msg":"trace[125518939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:325; }","duration":"853.579321ms","start":"2023-10-04T01:38:33.288726Z","end":"2023-10-04T01:38:34.142305Z","steps":["trace[125518939] 'agreement among raft nodes before linearized reading'  (duration: 849.682034ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143421Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.288711Z","time spent":"854.703273ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":1,"response size":242,"request content":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:34.141866Z","caller":"traceutil/trace.go:171","msg":"trace[1563608939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:325; }","duration":"800.76143ms","start":"2023-10-04T01:38:33.341099Z","end":"2023-10-04T01:38:34.14186Z","steps":["trace[1563608939] 'agreement among raft nodes before linearized reading'  (duration: 798.273287ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:34.143651Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:33.341092Z","time spent":"802.551376ms","remote":"127.0.0.1:40242","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":258,"request content":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" "}
	{"level":"info","ts":"2023-10-04T01:38:35.621101Z","caller":"traceutil/trace.go:171","msg":"trace[1772072781] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"168.762795ms","start":"2023-10-04T01:38:35.452325Z","end":"2023-10-04T01:38:35.621088Z","steps":["trace[1772072781] 'process raft request'  (duration: 168.625071ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:38:35.676193Z","caller":"traceutil/trace.go:171","msg":"trace[1078971930] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"165.379647ms","start":"2023-10-04T01:38:35.510794Z","end":"2023-10-04T01:38:35.676174Z","steps":["trace[1078971930] 'process raft request'  (duration: 165.182894ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:38:35.863659Z","caller":"traceutil/trace.go:171","msg":"trace[989546584] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"182.40217ms","start":"2023-10-04T01:38:35.681235Z","end":"2023-10-04T01:38:35.863638Z","steps":["trace[989546584] 'process raft request'  (duration: 120.106293ms)","trace[989546584] 'compare'  (duration: 61.990615ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T01:38:58.401095Z","caller":"traceutil/trace.go:171","msg":"trace[782125229] transaction","detail":"{read_only:false; response_revision:410; number_of_response:1; }","duration":"362.953778ms","start":"2023-10-04T01:38:58.03808Z","end":"2023-10-04T01:38:58.401033Z","steps":["trace[782125229] 'process raft request'  (duration: 362.573918ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:38:58.402163Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:38:58.038058Z","time spent":"364.007066ms","remote":"127.0.0.1:40256","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":676,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" mod_revision:407 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" value_size:603 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-jbfvhtrazaplv5f4fazfhqxxfm\" > >"}
	{"level":"warn","ts":"2023-10-04T01:38:58.567684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.757523ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2166384219963067625 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:1e108af855e90ce8>","response":"size:41"}
	{"level":"info","ts":"2023-10-04T01:39:17.112668Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-10-04T01:39:17.112798Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-720999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"]}
	{"level":"warn","ts":"2023-10-04T01:39:17.112909Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.113023Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.27118Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.236:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-10-04T01:39:17.271296Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.236:2379: use of closed network connection"}
	{"level":"info","ts":"2023-10-04T01:39:17.271393Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8ec6fb919e971e10","current-leader-member-id":"8ec6fb919e971e10"}
	{"level":"info","ts":"2023-10-04T01:39:17.275228Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:17.275377Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:17.27541Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-720999","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"]}
	
	* 
	* ==> etcd [3451fa76311802541ebcbf5bfd3f569da48cb45c0fc5bc19d5931d8dee7bbc99] <==
	* {"level":"info","ts":"2023-10-04T01:39:30.212961Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T01:39:30.213003Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-10-04T01:39:30.213313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 switched to configuration voters=(10288187001624010256)"}
	{"level":"info","ts":"2023-10-04T01:39:30.213429Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b1c7f4697fbaf59f","local-member-id":"8ec6fb919e971e10","added-peer-id":"8ec6fb919e971e10","added-peer-peer-urls":["https://192.168.72.236:2380"]}
	{"level":"info","ts":"2023-10-04T01:39:30.21376Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b1c7f4697fbaf59f","local-member-id":"8ec6fb919e971e10","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:39:30.213832Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:39:30.256761Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-04T01:39:30.257325Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:30.257647Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.236:2380"}
	{"level":"info","ts":"2023-10-04T01:39:30.26093Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"8ec6fb919e971e10","initial-advertise-peer-urls":["https://192.168.72.236:2380"],"listen-peer-urls":["https://192.168.72.236:2380"],"advertise-client-urls":["https://192.168.72.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:39:30.261089Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:39:31.118081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 received MsgPreVoteResp from 8ec6fb919e971e10 at term 2"}
	{"level":"info","ts":"2023-10-04T01:39:31.118379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.118488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 received MsgVoteResp from 8ec6fb919e971e10 at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.121654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8ec6fb919e971e10 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.121722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8ec6fb919e971e10 elected leader 8ec6fb919e971e10 at term 3"}
	{"level":"info","ts":"2023-10-04T01:39:31.124921Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8ec6fb919e971e10","local-member-attributes":"{Name:pause-720999 ClientURLs:[https://192.168.72.236:2379]}","request-path":"/0/members/8ec6fb919e971e10/attributes","cluster-id":"b1c7f4697fbaf59f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:39:31.125016Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:39:31.127328Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:39:31.129022Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:39:31.130718Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.236:2379"}
	{"level":"info","ts":"2023-10-04T01:39:31.145598Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:39:31.145708Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  01:39:53 up 2 min,  0 users,  load average: 1.65, 0.61, 0.22
	Linux pause-720999 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1aac370f99f26194ef789282e60a1cfc0da14a25f8578451781fb0538f8c440f] <==
	* I1004 01:38:34.235164       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 01:39:17.112789       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E1004 01:39:17.136744       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E1004 01:39:17.136912       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I1004 01:39:17.137095       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 01:39:17.137299       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I1004 01:39:17.138011       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I1004 01:39:17.139778       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1004 01:39:17.139846       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I1004 01:39:17.142141       1 controller.go:129] Ending legacy_token_tracking_controller
	I1004 01:39:17.142191       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I1004 01:39:17.142250       1 apiservice_controller.go:131] Shutting down APIServiceRegistrationController
	I1004 01:39:17.142294       1 available_controller.go:439] Shutting down AvailableConditionController
	I1004 01:39:17.142326       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I1004 01:39:17.142364       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I1004 01:39:17.142409       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1004 01:39:17.142447       1 autoregister_controller.go:165] Shutting down autoregister controller
	I1004 01:39:17.142982       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I1004 01:39:17.143034       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I1004 01:39:17.149685       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I1004 01:39:17.149762       1 establishing_controller.go:87] Shutting down EstablishingController
	I1004 01:39:17.149808       1 naming_controller.go:302] Shutting down NamingConditionController
	I1004 01:39:17.149855       1 controller.go:115] Shutting down OpenAPI V3 controller
	I1004 01:39:17.149906       1 controller.go:162] Shutting down OpenAPI controller
	I1004 01:39:17.149956       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	
	* 
	* ==> kube-apiserver [fb457dc442ba5f4655a3936c0b891778ea2f2044f7d6d95799aa02dbe488082f] <==
	* I1004 01:39:35.132574       1 controller.go:78] Starting OpenAPI AggregationController
	I1004 01:39:35.135382       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I1004 01:39:35.135395       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I1004 01:39:35.206572       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1004 01:39:35.207603       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1004 01:39:35.324053       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1004 01:39:35.324136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1004 01:39:35.324620       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1004 01:39:35.329857       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1004 01:39:35.331580       1 shared_informer.go:318] Caches are synced for configmaps
	I1004 01:39:35.331656       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1004 01:39:35.333157       1 aggregator.go:166] initial CRD sync complete...
	I1004 01:39:35.333233       1 autoregister_controller.go:141] Starting autoregister controller
	I1004 01:39:35.333264       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1004 01:39:35.333296       1 cache.go:39] Caches are synced for autoregister controller
	E1004 01:39:35.334322       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1004 01:39:35.335875       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1004 01:39:35.350548       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1004 01:39:35.399832       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1004 01:39:35.424035       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1004 01:39:36.140045       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	E1004 01:39:45.325000       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I1004 01:39:47.203732       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1004 01:39:47.204086       1 controller.go:624] quota admission added evaluator for: endpoints
	I1004 01:39:47.243975       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [d21ddb214823fb98a25736dfd57a806731c8d08fad04fc4f6bccb2faf8ffa0c1] <==
	* I1004 01:38:33.382695       1 shared_informer.go:318] Caches are synced for HPA
	I1004 01:38:33.404227       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1004 01:38:33.440051       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:38:33.448242       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:38:33.483553       1 shared_informer.go:318] Caches are synced for endpoint
	I1004 01:38:33.483603       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1004 01:38:33.535070       1 shared_informer.go:318] Caches are synced for attach detach
	I1004 01:38:33.882786       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:38:33.882818       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1004 01:38:33.911659       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:38:34.178955       1 range_allocator.go:380] "Set node PodCIDR" node="pause-720999" podCIDRs=["10.244.0.0/24"]
	I1004 01:38:34.258861       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1004 01:38:34.298559       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vhbwh"
	I1004 01:38:34.346778       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mn7rg"
	I1004 01:38:34.414855       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-56mr8"
	I1004 01:38:34.452171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="194.162472ms"
	I1004 01:38:34.522672       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.337218ms"
	I1004 01:38:34.522977       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.218µs"
	I1004 01:38:34.540808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.214µs"
	I1004 01:38:37.573896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="368.029µs"
	I1004 01:38:38.580377       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="119.943µs"
	I1004 01:38:38.650258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.123293ms"
	I1004 01:38:38.653724       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.747µs"
	I1004 01:38:38.721822       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.01581ms"
	I1004 01:38:38.722015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.764µs"
	
	* 
	* ==> kube-controller-manager [fd606a2e48532487e3beed899b2bef834b9176ce4559f1451a78d9c9d6ab830e] <==
	* I1004 01:39:47.218547       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:39:47.220590       1 shared_informer.go:318] Caches are synced for resource quota
	I1004 01:39:47.221109       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1004 01:39:47.226137       1 shared_informer.go:318] Caches are synced for job
	I1004 01:39:47.229148       1 shared_informer.go:318] Caches are synced for HPA
	I1004 01:39:47.231711       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1004 01:39:47.232063       1 shared_informer.go:318] Caches are synced for stateful set
	I1004 01:39:47.235732       1 shared_informer.go:318] Caches are synced for cronjob
	I1004 01:39:47.238542       1 shared_informer.go:318] Caches are synced for PVC protection
	I1004 01:39:47.238840       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I1004 01:39:47.238707       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I1004 01:39:47.242779       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I1004 01:39:47.242939       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1004 01:39:47.249974       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1004 01:39:47.254400       1 shared_informer.go:318] Caches are synced for ephemeral
	I1004 01:39:47.257922       1 shared_informer.go:318] Caches are synced for daemon sets
	I1004 01:39:47.261927       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mn7rg"
	I1004 01:39:47.270074       1 shared_informer.go:318] Caches are synced for persistent volume
	I1004 01:39:47.296516       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="64.438454ms"
	I1004 01:39:47.308758       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.460094ms"
	I1004 01:39:47.310044       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.29µs"
	I1004 01:39:47.382324       1 shared_informer.go:318] Caches are synced for attach detach
	I1004 01:39:47.776214       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:39:47.777549       1 shared_informer.go:318] Caches are synced for garbage collector
	I1004 01:39:47.777619       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [d3d7cbb80e8c6eb2decee94d8a4a62da3e17738bd6a97555c3a34e057ecf6429] <==
	* I1004 01:39:40.147526       1 server_others.go:69] "Using iptables proxy"
	I1004 01:39:40.163235       1 node.go:141] Successfully retrieved node IP: 192.168.72.236
	I1004 01:39:40.222721       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:39:40.222787       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:39:40.226820       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:39:40.227081       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:39:40.227678       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:39:40.228037       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:39:40.230081       1 config.go:188] "Starting service config controller"
	I1004 01:39:40.230244       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:39:40.230611       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:39:40.230809       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:39:40.233857       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:39:40.230965       1 config.go:315] "Starting node config controller"
	I1004 01:39:40.234209       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:39:40.330867       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:39:40.335064       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f4d88e5fa1102f4fec5aa81580401c50638486f2a27b4e1d41c2809cce03cc4b] <==
	* I1004 01:38:37.819007       1 server_others.go:69] "Using iptables proxy"
	I1004 01:38:37.844110       1 node.go:141] Successfully retrieved node IP: 192.168.72.236
	I1004 01:38:37.906672       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:38:37.906747       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:38:37.911104       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:38:37.911276       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:38:37.912024       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:38:37.912080       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:38:37.914093       1 config.go:188] "Starting service config controller"
	I1004 01:38:37.914606       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:38:37.914695       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:38:37.914723       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:38:37.916293       1 config.go:315] "Starting node config controller"
	I1004 01:38:37.916342       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:38:38.014980       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:38:38.015201       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:38:38.017584       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [7849aa199d9a79ec26c227b11a9852a46b32d73a55138f268338a9fc72c37ab7] <==
	* I1004 01:39:32.271537       1 serving.go:348] Generated self-signed cert in-memory
	W1004 01:39:35.256758       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1004 01:39:35.256865       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:39:35.256897       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1004 01:39:35.256925       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1004 01:39:35.346979       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 01:39:35.347132       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:39:35.366260       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:39:35.366363       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:39:35.370796       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:39:35.370897       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:39:35.466695       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [a199e1ab47e3d82b1c53739d525bc7a756bc946141d39558240a9acbc75fa00a] <==
	* W1004 01:38:16.713858       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:38:16.713867       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 01:38:16.713920       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:38:16.713930       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 01:38:17.540834       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 01:38:17.540961       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 01:38:17.551088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:38:17.551161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 01:38:17.559596       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:38:17.559689       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 01:38:17.658790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:38:17.658887       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 01:38:17.700428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 01:38:17.700507       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 01:38:17.710800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:38:17.710884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 01:38:17.956979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:38:17.957039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 01:38:18.174552       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:38:18.174716       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 01:38:20.100796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:39:17.129097       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1004 01:39:17.129394       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1004 01:39:17.130288       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1004 01:39:17.132222       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:37:47 UTC, ends at Wed 2023-10-04 01:39:53 UTC. --
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.386173    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.387155    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.387823    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.388153    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.388415    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.887260    1262 status_manager.go:853] "Failed to get status for pod" podUID="410133f71508ed4d79e0c35165939440" pod="kube-system/etcd-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.887838    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888092    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888392    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.888741    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.889006    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:30 pause-720999 kubelet[1262]: I1004 01:39:30.889283    1262 status_manager.go:853] "Failed to get status for pod" podUID="f26e4ccbbd5847f682171b15b5eb9f92" pod="kube-system/kube-scheduler-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.893806    1262 status_manager.go:853] "Failed to get status for pod" podUID="d1f9d83f-6d5e-40bb-8504-ff7867bea039" pod="kube-system/kube-proxy-vhbwh" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vhbwh\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894271    1262 status_manager.go:853] "Failed to get status for pod" podUID="d030c38e-8704-480a-96d3-fa78c83de8a7" pod="kube-system/coredns-5dd5756b68-mn7rg" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-mn7rg\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894639    1262 status_manager.go:853] "Failed to get status for pod" podUID="dc92e1e3-f685-4343-b8ed-8c37efb906c6" pod="kube-system/coredns-5dd5756b68-56mr8" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-56mr8\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.894950    1262 status_manager.go:853] "Failed to get status for pod" podUID="f26e4ccbbd5847f682171b15b5eb9f92" pod="kube-system/kube-scheduler-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895220    1262 status_manager.go:853] "Failed to get status for pod" podUID="410133f71508ed4d79e0c35165939440" pod="kube-system/etcd-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/etcd-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895644    1262 status_manager.go:853] "Failed to get status for pod" podUID="98d53559b7ed33dbf1c34e63ac43bc9b" pod="kube-system/kube-apiserver-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:31 pause-720999 kubelet[1262]: I1004 01:39:31.895943    1262 status_manager.go:853] "Failed to get status for pod" podUID="5c3b1831477b47ac77b4cd29bf5cc7f1" pod="kube-system/kube-controller-manager-pause-720999" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-720999\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.202516    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?resourceVersion=0&timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.202829    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203111    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203413    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203713    1262 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"pause-720999\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-720999?timeout=10s\": dial tcp 192.168.72.236:8443: connect: connection refused"
	Oct 04 01:39:32 pause-720999 kubelet[1262]: E1004 01:39:32.203728    1262 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-720999 -n pause-720999
helpers_test.go:261: (dbg) Run:  kubectl --context pause-720999 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (72.96s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (139.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-273516 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-273516 --alsologtostderr -v=3: exit status 82 (2m1.429876169s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-273516"  ...
	* Stopping node "no-preload-273516"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:42:09.633987  165253 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:42:09.634264  165253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:09.634275  165253 out.go:309] Setting ErrFile to fd 2...
	I1004 01:42:09.634282  165253 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:09.634478  165253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:42:09.634732  165253 out.go:303] Setting JSON to false
	I1004 01:42:09.634831  165253 mustload.go:65] Loading cluster: no-preload-273516
	I1004 01:42:09.635275  165253 config.go:182] Loaded profile config "no-preload-273516": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:42:09.635371  165253 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/no-preload-273516/config.json ...
	I1004 01:42:09.635551  165253 mustload.go:65] Loading cluster: no-preload-273516
	I1004 01:42:09.635693  165253 config.go:182] Loaded profile config "no-preload-273516": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:42:09.635738  165253 stop.go:39] StopHost: no-preload-273516
	I1004 01:42:09.636116  165253 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:42:09.636199  165253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:42:09.652501  165253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33199
	I1004 01:42:09.653119  165253 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:42:09.653723  165253 main.go:141] libmachine: Using API Version  1
	I1004 01:42:09.653748  165253 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:42:09.654132  165253 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:42:09.657251  165253 out.go:177] * Stopping node "no-preload-273516"  ...
	I1004 01:42:09.658815  165253 main.go:141] libmachine: Stopping "no-preload-273516"...
	I1004 01:42:09.658856  165253 main.go:141] libmachine: (no-preload-273516) Calling .GetState
	I1004 01:42:09.660855  165253 main.go:141] libmachine: (no-preload-273516) Calling .Stop
	I1004 01:42:09.664764  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 0/60
	I1004 01:42:10.666337  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 1/60
	I1004 01:42:11.668440  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 2/60
	I1004 01:42:12.669660  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 3/60
	I1004 01:42:13.671326  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 4/60
	I1004 01:42:14.673330  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 5/60
	I1004 01:42:15.674876  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 6/60
	I1004 01:42:16.676403  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 7/60
	I1004 01:42:17.677775  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 8/60
	I1004 01:42:18.679378  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 9/60
	I1004 01:42:19.681874  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 10/60
	I1004 01:42:20.684363  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 11/60
	I1004 01:42:21.686519  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 12/60
	I1004 01:42:22.688671  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 13/60
	I1004 01:42:23.691178  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 14/60
	I1004 01:42:24.693036  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 15/60
	I1004 01:42:25.695435  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 16/60
	I1004 01:42:26.697450  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 17/60
	I1004 01:42:27.698935  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 18/60
	I1004 01:42:28.700384  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 19/60
	I1004 01:42:29.701990  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 20/60
	I1004 01:42:30.704414  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 21/60
	I1004 01:42:31.705915  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 22/60
	I1004 01:42:32.707486  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 23/60
	I1004 01:42:33.708848  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 24/60
	I1004 01:42:34.711070  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 25/60
	I1004 01:42:35.713018  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 26/60
	I1004 01:42:36.714934  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 27/60
	I1004 01:42:37.716525  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 28/60
	I1004 01:42:38.718227  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 29/60
	I1004 01:42:39.720419  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 30/60
	I1004 01:42:40.721825  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 31/60
	I1004 01:42:41.723496  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 32/60
	I1004 01:42:42.724801  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 33/60
	I1004 01:42:43.726322  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 34/60
	I1004 01:42:44.728499  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 35/60
	I1004 01:42:45.730241  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 36/60
	I1004 01:42:46.731901  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 37/60
	I1004 01:42:47.733788  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 38/60
	I1004 01:42:48.735052  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 39/60
	I1004 01:42:49.737198  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 40/60
	I1004 01:42:50.738936  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 41/60
	I1004 01:42:51.740570  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 42/60
	I1004 01:42:52.742227  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 43/60
	I1004 01:42:53.743756  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 44/60
	I1004 01:42:54.745992  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 45/60
	I1004 01:42:55.747733  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 46/60
	I1004 01:42:56.749145  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 47/60
	I1004 01:42:57.750513  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 48/60
	I1004 01:42:58.752230  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 49/60
	I1004 01:42:59.754522  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 50/60
	I1004 01:43:00.756037  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 51/60
	I1004 01:43:01.757696  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 52/60
	I1004 01:43:02.759118  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 53/60
	I1004 01:43:03.760407  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 54/60
	I1004 01:43:04.762521  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 55/60
	I1004 01:43:05.764253  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 56/60
	I1004 01:43:06.766805  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 57/60
	I1004 01:43:07.768491  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 58/60
	I1004 01:43:08.770566  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 59/60
	I1004 01:43:09.771405  165253 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:43:09.771494  165253 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:09.771518  165253 retry.go:31] will retry after 1.113713568s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:10.885782  165253 stop.go:39] StopHost: no-preload-273516
	I1004 01:43:10.886208  165253 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:43:10.886258  165253 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:43:10.900847  165253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39935
	I1004 01:43:10.901339  165253 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:43:10.901951  165253 main.go:141] libmachine: Using API Version  1
	I1004 01:43:10.901983  165253 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:43:10.902307  165253 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:43:10.904451  165253 out.go:177] * Stopping node "no-preload-273516"  ...
	I1004 01:43:10.905855  165253 main.go:141] libmachine: Stopping "no-preload-273516"...
	I1004 01:43:10.905874  165253 main.go:141] libmachine: (no-preload-273516) Calling .GetState
	I1004 01:43:10.907469  165253 main.go:141] libmachine: (no-preload-273516) Calling .Stop
	I1004 01:43:10.911159  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 0/60
	I1004 01:43:11.912389  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 1/60
	I1004 01:43:12.913898  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 2/60
	I1004 01:43:13.915531  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 3/60
	I1004 01:43:14.916948  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 4/60
	I1004 01:43:15.918884  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 5/60
	I1004 01:43:16.920393  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 6/60
	I1004 01:43:17.921741  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 7/60
	I1004 01:43:18.923181  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 8/60
	I1004 01:43:19.925485  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 9/60
	I1004 01:43:20.927972  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 10/60
	I1004 01:43:21.929590  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 11/60
	I1004 01:43:22.931169  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 12/60
	I1004 01:43:23.932812  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 13/60
	I1004 01:43:24.934107  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 14/60
	I1004 01:43:25.935995  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 15/60
	I1004 01:43:26.937395  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 16/60
	I1004 01:43:27.938926  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 17/60
	I1004 01:43:28.940506  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 18/60
	I1004 01:43:29.942125  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 19/60
	I1004 01:43:30.943499  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 20/60
	I1004 01:43:31.944923  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 21/60
	I1004 01:43:32.946448  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 22/60
	I1004 01:43:33.948116  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 23/60
	I1004 01:43:34.949735  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 24/60
	I1004 01:43:35.951947  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 25/60
	I1004 01:43:36.953797  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 26/60
	I1004 01:43:37.956064  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 27/60
	I1004 01:43:38.957680  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 28/60
	I1004 01:43:39.959111  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 29/60
	I1004 01:43:40.960682  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 30/60
	I1004 01:43:41.962757  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 31/60
	I1004 01:43:42.964650  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 32/60
	I1004 01:43:43.966346  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 33/60
	I1004 01:43:44.968697  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 34/60
	I1004 01:43:45.970527  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 35/60
	I1004 01:43:46.972384  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 36/60
	I1004 01:43:47.973788  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 37/60
	I1004 01:43:48.976140  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 38/60
	I1004 01:43:49.977592  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 39/60
	I1004 01:43:50.979699  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 40/60
	I1004 01:43:51.981098  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 41/60
	I1004 01:43:52.983013  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 42/60
	I1004 01:43:53.984401  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 43/60
	I1004 01:43:54.986616  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 44/60
	I1004 01:43:55.988805  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 45/60
	I1004 01:43:56.990109  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 46/60
	I1004 01:43:57.991620  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 47/60
	I1004 01:43:58.993976  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 48/60
	I1004 01:43:59.995448  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 49/60
	I1004 01:44:00.996715  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 50/60
	I1004 01:44:01.998234  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 51/60
	I1004 01:44:02.999685  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 52/60
	I1004 01:44:04.001514  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 53/60
	I1004 01:44:05.002630  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 54/60
	I1004 01:44:06.004957  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 55/60
	I1004 01:44:07.006434  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 56/60
	I1004 01:44:08.007722  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 57/60
	I1004 01:44:09.009516  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 58/60
	I1004 01:44:10.011019  165253 main.go:141] libmachine: (no-preload-273516) Waiting for machine to stop 59/60
	I1004 01:44:11.012477  165253 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:44:11.012525  165253 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:44:11.014784  165253 out.go:177] 
	W1004 01:44:11.016465  165253 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 01:44:11.016482  165253 out.go:239] * 
	* 
	W1004 01:44:11.018905  165253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 01:44:11.020519  165253 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-273516 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516: exit status 3 (18.428612353s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:29.450257  166455 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host
	E1004 01:44:29.450300  166455 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-273516" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-509298 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-509298 --alsologtostderr -v=3: exit status 82 (2m1.932608408s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-509298"  ...
	* Stopping node "embed-certs-509298"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:42:32.385292  165496 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:42:32.385409  165496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:32.385418  165496 out.go:309] Setting ErrFile to fd 2...
	I1004 01:42:32.385423  165496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:32.385590  165496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:42:32.385816  165496 out.go:303] Setting JSON to false
	I1004 01:42:32.385915  165496 mustload.go:65] Loading cluster: embed-certs-509298
	I1004 01:42:32.386302  165496 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:42:32.386371  165496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/embed-certs-509298/config.json ...
	I1004 01:42:32.386532  165496 mustload.go:65] Loading cluster: embed-certs-509298
	I1004 01:42:32.386631  165496 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:42:32.386656  165496 stop.go:39] StopHost: embed-certs-509298
	I1004 01:42:32.386997  165496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:42:32.387047  165496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:42:32.401546  165496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36031
	I1004 01:42:32.402067  165496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:42:32.402609  165496 main.go:141] libmachine: Using API Version  1
	I1004 01:42:32.402630  165496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:42:32.402981  165496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:42:32.405344  165496 out.go:177] * Stopping node "embed-certs-509298"  ...
	I1004 01:42:32.406702  165496 main.go:141] libmachine: Stopping "embed-certs-509298"...
	I1004 01:42:32.406721  165496 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:42:32.408370  165496 main.go:141] libmachine: (embed-certs-509298) Calling .Stop
	I1004 01:42:32.412308  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 0/60
	I1004 01:42:33.414163  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 1/60
	I1004 01:42:34.416396  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 2/60
	I1004 01:42:35.417807  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 3/60
	I1004 01:42:36.419568  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 4/60
	I1004 01:42:37.421776  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 5/60
	I1004 01:42:38.423304  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 6/60
	I1004 01:42:39.424817  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 7/60
	I1004 01:42:40.426271  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 8/60
	I1004 01:42:41.427611  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 9/60
	I1004 01:42:42.429904  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 10/60
	I1004 01:42:43.431232  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 11/60
	I1004 01:42:44.432718  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 12/60
	I1004 01:42:45.434961  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 13/60
	I1004 01:42:46.437197  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 14/60
	I1004 01:42:47.439858  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 15/60
	I1004 01:42:48.441770  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 16/60
	I1004 01:42:49.443424  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 17/60
	I1004 01:42:50.445177  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 18/60
	I1004 01:42:51.446914  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 19/60
	I1004 01:42:52.448852  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 20/60
	I1004 01:42:53.450726  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 21/60
	I1004 01:42:54.452286  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 22/60
	I1004 01:42:55.453771  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 23/60
	I1004 01:42:56.455147  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 24/60
	I1004 01:42:57.457106  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 25/60
	I1004 01:42:58.458559  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 26/60
	I1004 01:42:59.460006  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 27/60
	I1004 01:43:00.461419  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 28/60
	I1004 01:43:01.463008  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 29/60
	I1004 01:43:02.465522  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 30/60
	I1004 01:43:03.467060  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 31/60
	I1004 01:43:04.468609  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 32/60
	I1004 01:43:05.470352  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 33/60
	I1004 01:43:06.471908  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 34/60
	I1004 01:43:07.473979  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 35/60
	I1004 01:43:08.475303  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 36/60
	I1004 01:43:09.476708  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 37/60
	I1004 01:43:10.479146  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 38/60
	I1004 01:43:11.480438  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 39/60
	I1004 01:43:12.481679  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 40/60
	I1004 01:43:13.483212  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 41/60
	I1004 01:43:14.484482  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 42/60
	I1004 01:43:15.485658  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 43/60
	I1004 01:43:16.487302  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 44/60
	I1004 01:43:17.489151  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 45/60
	I1004 01:43:18.490685  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 46/60
	I1004 01:43:19.493477  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 47/60
	I1004 01:43:20.494867  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 48/60
	I1004 01:43:21.496718  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 49/60
	I1004 01:43:22.498916  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 50/60
	I1004 01:43:23.500599  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 51/60
	I1004 01:43:24.502205  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 52/60
	I1004 01:43:25.503518  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 53/60
	I1004 01:43:26.505035  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 54/60
	I1004 01:43:27.507180  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 55/60
	I1004 01:43:28.508590  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 56/60
	I1004 01:43:29.510025  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 57/60
	I1004 01:43:30.511404  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 58/60
	I1004 01:43:31.512788  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 59/60
	I1004 01:43:32.514155  165496 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:43:32.514232  165496 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:32.514253  165496 retry.go:31] will retry after 1.284651028s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:33.799682  165496 stop.go:39] StopHost: embed-certs-509298
	I1004 01:43:33.800174  165496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:43:33.800235  165496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:43:33.815413  165496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44275
	I1004 01:43:33.815955  165496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:43:33.816534  165496 main.go:141] libmachine: Using API Version  1
	I1004 01:43:33.816564  165496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:43:33.816911  165496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:43:33.819153  165496 out.go:177] * Stopping node "embed-certs-509298"  ...
	I1004 01:43:33.820786  165496 main.go:141] libmachine: Stopping "embed-certs-509298"...
	I1004 01:43:33.820806  165496 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:43:33.822549  165496 main.go:141] libmachine: (embed-certs-509298) Calling .Stop
	I1004 01:43:33.825832  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 0/60
	I1004 01:43:34.827458  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 1/60
	I1004 01:43:35.828964  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 2/60
	I1004 01:43:36.830643  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 3/60
	I1004 01:43:37.832653  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 4/60
	I1004 01:43:38.834920  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 5/60
	I1004 01:43:39.836403  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 6/60
	I1004 01:43:40.838408  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 7/60
	I1004 01:43:41.840432  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 8/60
	I1004 01:43:42.842130  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 9/60
	I1004 01:43:43.843878  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 10/60
	I1004 01:43:44.845730  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 11/60
	I1004 01:43:45.847110  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 12/60
	I1004 01:43:46.848535  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 13/60
	I1004 01:43:47.849866  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 14/60
	I1004 01:43:48.851552  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 15/60
	I1004 01:43:49.853148  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 16/60
	I1004 01:43:50.854897  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 17/60
	I1004 01:43:51.856945  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 18/60
	I1004 01:43:52.858456  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 19/60
	I1004 01:43:53.860837  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 20/60
	I1004 01:43:54.862510  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 21/60
	I1004 01:43:55.864028  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 22/60
	I1004 01:43:56.865505  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 23/60
	I1004 01:43:57.866935  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 24/60
	I1004 01:43:58.868653  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 25/60
	I1004 01:43:59.870413  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 26/60
	I1004 01:44:00.871779  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 27/60
	I1004 01:44:01.873151  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 28/60
	I1004 01:44:02.874841  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 29/60
	I1004 01:44:03.877221  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 30/60
	I1004 01:44:04.878685  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 31/60
	I1004 01:44:05.880543  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 32/60
	I1004 01:44:06.882340  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 33/60
	I1004 01:44:07.883798  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 34/60
	I1004 01:44:08.885773  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 35/60
	I1004 01:44:09.887226  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 36/60
	I1004 01:44:10.888556  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 37/60
	I1004 01:44:11.890349  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 38/60
	I1004 01:44:12.891781  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 39/60
	I1004 01:44:13.893742  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 40/60
	I1004 01:44:14.895426  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 41/60
	I1004 01:44:15.897009  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 42/60
	I1004 01:44:16.898862  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 43/60
	I1004 01:44:17.900621  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 44/60
	I1004 01:44:18.902728  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 45/60
	I1004 01:44:19.904471  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 46/60
	I1004 01:44:20.906060  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 47/60
	I1004 01:44:21.907651  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 48/60
	I1004 01:44:22.908990  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 49/60
	I1004 01:44:23.911091  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 50/60
	I1004 01:44:24.913076  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 51/60
	I1004 01:44:25.914325  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 52/60
	I1004 01:44:26.915924  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 53/60
	I1004 01:44:27.917475  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 54/60
	I1004 01:44:28.919416  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 55/60
	I1004 01:44:29.921237  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 56/60
	I1004 01:44:30.923068  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 57/60
	I1004 01:44:31.925178  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 58/60
	I1004 01:44:32.926999  165496 main.go:141] libmachine: (embed-certs-509298) Waiting for machine to stop 59/60
	I1004 01:44:33.928191  165496 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:44:33.928253  165496 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:44:34.073872  165496 out.go:177] 
	W1004 01:44:34.137182  165496 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 01:44:34.137243  165496 out.go:239] * 
	* 
	W1004 01:44:34.139514  165496 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 01:44:34.245403  165496 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-509298 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298: exit status 3 (18.466797163s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:52.746338  166643 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host
	E1004 01:44:52.746359  166643 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-509298" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-107182 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-107182 --alsologtostderr -v=3: exit status 82 (2m1.446956587s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-107182"  ...
	* Stopping node "old-k8s-version-107182"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:42:35.018919  165563 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:42:35.019213  165563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:35.019222  165563 out.go:309] Setting ErrFile to fd 2...
	I1004 01:42:35.019227  165563 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:42:35.019431  165563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:42:35.019677  165563 out.go:303] Setting JSON to false
	I1004 01:42:35.019761  165563 mustload.go:65] Loading cluster: old-k8s-version-107182
	I1004 01:42:35.020100  165563 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1004 01:42:35.020170  165563 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/config.json ...
	I1004 01:42:35.020335  165563 mustload.go:65] Loading cluster: old-k8s-version-107182
	I1004 01:42:35.020446  165563 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1004 01:42:35.020479  165563 stop.go:39] StopHost: old-k8s-version-107182
	I1004 01:42:35.020933  165563 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:42:35.020971  165563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:42:35.035931  165563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I1004 01:42:35.036480  165563 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:42:35.037121  165563 main.go:141] libmachine: Using API Version  1
	I1004 01:42:35.037146  165563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:42:35.037479  165563 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:42:35.040026  165563 out.go:177] * Stopping node "old-k8s-version-107182"  ...
	I1004 01:42:35.042418  165563 main.go:141] libmachine: Stopping "old-k8s-version-107182"...
	I1004 01:42:35.042435  165563 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:42:35.044303  165563 main.go:141] libmachine: (old-k8s-version-107182) Calling .Stop
	I1004 01:42:35.048149  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 0/60
	I1004 01:42:36.050312  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 1/60
	I1004 01:42:37.052521  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 2/60
	I1004 01:42:38.054087  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 3/60
	I1004 01:42:39.055687  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 4/60
	I1004 01:42:40.057981  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 5/60
	I1004 01:42:41.059581  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 6/60
	I1004 01:42:42.060974  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 7/60
	I1004 01:42:43.062815  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 8/60
	I1004 01:42:44.064425  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 9/60
	I1004 01:42:45.066088  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 10/60
	I1004 01:42:46.067475  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 11/60
	I1004 01:42:47.069299  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 12/60
	I1004 01:42:48.070678  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 13/60
	I1004 01:42:49.072275  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 14/60
	I1004 01:42:50.074385  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 15/60
	I1004 01:42:51.075855  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 16/60
	I1004 01:42:52.315222  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 17/60
	I1004 01:42:53.316933  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 18/60
	I1004 01:42:54.318398  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 19/60
	I1004 01:42:55.320514  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 20/60
	I1004 01:42:56.321994  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 21/60
	I1004 01:42:57.323266  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 22/60
	I1004 01:42:58.324839  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 23/60
	I1004 01:42:59.326206  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 24/60
	I1004 01:43:00.328480  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 25/60
	I1004 01:43:01.329853  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 26/60
	I1004 01:43:02.331532  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 27/60
	I1004 01:43:03.333231  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 28/60
	I1004 01:43:04.334765  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 29/60
	I1004 01:43:05.337341  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 30/60
	I1004 01:43:06.339012  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 31/60
	I1004 01:43:07.340650  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 32/60
	I1004 01:43:08.342237  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 33/60
	I1004 01:43:09.343494  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 34/60
	I1004 01:43:10.345316  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 35/60
	I1004 01:43:11.346635  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 36/60
	I1004 01:43:12.348024  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 37/60
	I1004 01:43:13.349310  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 38/60
	I1004 01:43:14.350542  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 39/60
	I1004 01:43:15.352846  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 40/60
	I1004 01:43:16.354421  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 41/60
	I1004 01:43:17.355904  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 42/60
	I1004 01:43:18.357557  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 43/60
	I1004 01:43:19.359179  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 44/60
	I1004 01:43:20.360804  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 45/60
	I1004 01:43:21.362390  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 46/60
	I1004 01:43:22.363904  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 47/60
	I1004 01:43:23.365226  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 48/60
	I1004 01:43:24.366734  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 49/60
	I1004 01:43:25.368862  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 50/60
	I1004 01:43:26.370468  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 51/60
	I1004 01:43:27.371939  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 52/60
	I1004 01:43:28.373343  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 53/60
	I1004 01:43:29.374940  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 54/60
	I1004 01:43:30.377095  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 55/60
	I1004 01:43:31.378561  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 56/60
	I1004 01:43:32.380045  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 57/60
	I1004 01:43:33.381620  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 58/60
	I1004 01:43:34.383179  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 59/60
	I1004 01:43:35.384355  165563 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:43:35.384414  165563 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:35.384446  165563 retry.go:31] will retry after 899.193389ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:43:36.284489  165563 stop.go:39] StopHost: old-k8s-version-107182
	I1004 01:43:36.284885  165563 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:43:36.284935  165563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:43:36.301193  165563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46401
	I1004 01:43:36.301704  165563 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:43:36.302712  165563 main.go:141] libmachine: Using API Version  1
	I1004 01:43:36.302738  165563 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:43:36.303076  165563 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:43:36.304959  165563 out.go:177] * Stopping node "old-k8s-version-107182"  ...
	I1004 01:43:36.306380  165563 main.go:141] libmachine: Stopping "old-k8s-version-107182"...
	I1004 01:43:36.306400  165563 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:43:36.308359  165563 main.go:141] libmachine: (old-k8s-version-107182) Calling .Stop
	I1004 01:43:36.312478  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 0/60
	I1004 01:43:37.313943  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 1/60
	I1004 01:43:38.315535  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 2/60
	I1004 01:43:39.316791  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 3/60
	I1004 01:43:40.319347  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 4/60
	I1004 01:43:41.320847  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 5/60
	I1004 01:43:42.322297  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 6/60
	I1004 01:43:43.323605  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 7/60
	I1004 01:43:44.324883  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 8/60
	I1004 01:43:45.326298  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 9/60
	I1004 01:43:46.328442  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 10/60
	I1004 01:43:47.329789  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 11/60
	I1004 01:43:48.331077  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 12/60
	I1004 01:43:49.332410  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 13/60
	I1004 01:43:50.334471  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 14/60
	I1004 01:43:51.335963  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 15/60
	I1004 01:43:52.337427  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 16/60
	I1004 01:43:53.338902  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 17/60
	I1004 01:43:54.340472  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 18/60
	I1004 01:43:55.341832  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 19/60
	I1004 01:43:56.343326  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 20/60
	I1004 01:43:57.345600  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 21/60
	I1004 01:43:58.347092  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 22/60
	I1004 01:43:59.348560  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 23/60
	I1004 01:44:00.350002  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 24/60
	I1004 01:44:01.351883  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 25/60
	I1004 01:44:02.353307  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 26/60
	I1004 01:44:03.354732  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 27/60
	I1004 01:44:04.356092  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 28/60
	I1004 01:44:05.357759  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 29/60
	I1004 01:44:06.360216  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 30/60
	I1004 01:44:07.361620  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 31/60
	I1004 01:44:08.363156  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 32/60
	I1004 01:44:09.364688  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 33/60
	I1004 01:44:10.366283  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 34/60
	I1004 01:44:11.368321  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 35/60
	I1004 01:44:12.369679  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 36/60
	I1004 01:44:13.372119  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 37/60
	I1004 01:44:14.373691  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 38/60
	I1004 01:44:15.375380  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 39/60
	I1004 01:44:16.377334  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 40/60
	I1004 01:44:17.379147  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 41/60
	I1004 01:44:18.380601  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 42/60
	I1004 01:44:19.382274  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 43/60
	I1004 01:44:20.383988  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 44/60
	I1004 01:44:21.386086  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 45/60
	I1004 01:44:22.388247  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 46/60
	I1004 01:44:23.389753  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 47/60
	I1004 01:44:24.391025  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 48/60
	I1004 01:44:25.392728  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 49/60
	I1004 01:44:26.394777  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 50/60
	I1004 01:44:27.396508  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 51/60
	I1004 01:44:28.398207  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 52/60
	I1004 01:44:29.399947  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 53/60
	I1004 01:44:30.401657  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 54/60
	I1004 01:44:31.403605  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 55/60
	I1004 01:44:32.404941  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 56/60
	I1004 01:44:33.406500  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 57/60
	I1004 01:44:34.408129  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 58/60
	I1004 01:44:35.409740  165563 main.go:141] libmachine: (old-k8s-version-107182) Waiting for machine to stop 59/60
	I1004 01:44:36.410636  165563 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:44:36.410682  165563 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:44:36.412850  165563 out.go:177] 
	W1004 01:44:36.414424  165563 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 01:44:36.414438  165563 out.go:239] * 
	* 
	W1004 01:44:36.416751  165563 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 01:44:36.418095  165563 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-107182 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182: exit status 3 (18.629341031s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:55.050173  166673 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E1004 01:44:55.050193  166673 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-107182" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516: exit status 3 (3.167120767s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:32.618219  166541 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host
	E1004 01:44:32.618247  166541 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-273516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-273516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152984361s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-273516 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516: exit status 3 (3.062395265s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:41.834225  166714 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host
	E1004 01:44:41.834251  166714 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.83.165:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-273516" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298: exit status 3 (3.167985846s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1004 01:44:55.914267  166873 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host
	E1004 01:44:55.914293  166873 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-509298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-509298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152344107s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-509298 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298: exit status 3 (3.063517526s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1004 01:45:05.130233  167392 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host
	E1004 01:45:05.130252  167392 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.170:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-509298" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182: exit status 3 (3.16757847s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1004 01:44:58.218167  166904 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E1004 01:44:58.218186  166904 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-107182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-107182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152884645s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-107182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182: exit status 3 (3.063334702s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1004 01:45:07.434241  167422 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host
	E1004 01:45:07.434262  167422 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.182:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-107182" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (140.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-239802 --alsologtostderr -v=3
E1004 01:51:05.194695  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-239802 --alsologtostderr -v=3: exit status 82 (2m1.716980319s)

-- stdout --
	* Stopping node "default-k8s-diff-port-239802"  ...
	* Stopping node "default-k8s-diff-port-239802"  ...
	
	

-- /stdout --
** stderr ** 
	I1004 01:50:39.311583  168880 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:50:39.311879  168880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:50:39.311926  168880 out.go:309] Setting ErrFile to fd 2...
	I1004 01:50:39.311944  168880 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:50:39.312286  168880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:50:39.312690  168880 out.go:303] Setting JSON to false
	I1004 01:50:39.312854  168880 mustload.go:65] Loading cluster: default-k8s-diff-port-239802
	I1004 01:50:39.313372  168880 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:50:39.313473  168880 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:50:39.313675  168880 mustload.go:65] Loading cluster: default-k8s-diff-port-239802
	I1004 01:50:39.313816  168880 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:50:39.313873  168880 stop.go:39] StopHost: default-k8s-diff-port-239802
	I1004 01:50:39.314373  168880 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:50:39.314440  168880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:50:39.336782  168880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35985
	I1004 01:50:39.338090  168880 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:50:39.338973  168880 main.go:141] libmachine: Using API Version  1
	I1004 01:50:39.339002  168880 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:50:39.339616  168880 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:50:39.342123  168880 out.go:177] * Stopping node "default-k8s-diff-port-239802"  ...
	I1004 01:50:39.343655  168880 main.go:141] libmachine: Stopping "default-k8s-diff-port-239802"...
	I1004 01:50:39.343718  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:50:39.345969  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Stop
	I1004 01:50:39.350666  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 0/60
	I1004 01:50:40.352439  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 1/60
	I1004 01:50:41.353826  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 2/60
	I1004 01:50:42.355885  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 3/60
	I1004 01:50:43.357175  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 4/60
	I1004 01:50:44.359441  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 5/60
	I1004 01:50:45.360976  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 6/60
	I1004 01:50:46.362315  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 7/60
	I1004 01:50:47.364536  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 8/60
	I1004 01:50:48.366029  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 9/60
	I1004 01:50:49.368363  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 10/60
	I1004 01:50:50.369892  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 11/60
	I1004 01:50:51.371471  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 12/60
	I1004 01:50:52.373658  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 13/60
	I1004 01:50:53.375766  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 14/60
	I1004 01:50:54.378330  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 15/60
	I1004 01:50:55.380486  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 16/60
	I1004 01:50:56.382648  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 17/60
	I1004 01:50:57.384199  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 18/60
	I1004 01:50:58.385641  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 19/60
	I1004 01:50:59.388162  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 20/60
	I1004 01:51:00.389862  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 21/60
	I1004 01:51:01.391297  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 22/60
	I1004 01:51:02.393438  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 23/60
	I1004 01:51:03.394961  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 24/60
	I1004 01:51:04.396864  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 25/60
	I1004 01:51:05.398320  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 26/60
	I1004 01:51:06.399841  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 27/60
	I1004 01:51:07.401487  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 28/60
	I1004 01:51:08.402962  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 29/60
	I1004 01:51:09.405333  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 30/60
	I1004 01:51:10.406715  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 31/60
	I1004 01:51:11.408098  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 32/60
	I1004 01:51:12.410352  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 33/60
	I1004 01:51:13.411699  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 34/60
	I1004 01:51:14.413593  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 35/60
	I1004 01:51:15.415056  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 36/60
	I1004 01:51:16.417057  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 37/60
	I1004 01:51:17.418457  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 38/60
	I1004 01:51:18.420353  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 39/60
	I1004 01:51:19.422450  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 40/60
	I1004 01:51:20.424128  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 41/60
	I1004 01:51:21.425615  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 42/60
	I1004 01:51:22.426965  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 43/60
	I1004 01:51:23.428903  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 44/60
	I1004 01:51:24.431162  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 45/60
	I1004 01:51:25.432566  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 46/60
	I1004 01:51:26.434186  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 47/60
	I1004 01:51:27.435659  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 48/60
	I1004 01:51:28.436973  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 49/60
	I1004 01:51:29.439331  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 50/60
	I1004 01:51:30.441079  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 51/60
	I1004 01:51:31.442746  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 52/60
	I1004 01:51:32.444702  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 53/60
	I1004 01:51:33.446530  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 54/60
	I1004 01:51:34.448888  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 55/60
	I1004 01:51:35.450398  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 56/60
	I1004 01:51:36.451799  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 57/60
	I1004 01:51:37.453274  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 58/60
	I1004 01:51:38.454603  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 59/60
	I1004 01:51:39.455985  168880 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:51:39.456067  168880 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:51:39.456092  168880 retry.go:31] will retry after 1.369453003s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:51:40.826636  168880 stop.go:39] StopHost: default-k8s-diff-port-239802
	I1004 01:51:40.827129  168880 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:51:40.827178  168880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:51:40.843000  168880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44941
	I1004 01:51:40.843507  168880 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:51:40.844171  168880 main.go:141] libmachine: Using API Version  1
	I1004 01:51:40.844206  168880 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:51:40.844565  168880 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:51:40.846529  168880 out.go:177] * Stopping node "default-k8s-diff-port-239802"  ...
	I1004 01:51:40.848156  168880 main.go:141] libmachine: Stopping "default-k8s-diff-port-239802"...
	I1004 01:51:40.848191  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:51:40.849695  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Stop
	I1004 01:51:40.853092  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 0/60
	I1004 01:51:41.854502  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 1/60
	I1004 01:51:42.856725  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 2/60
	I1004 01:51:43.858271  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 3/60
	I1004 01:51:44.860256  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 4/60
	I1004 01:51:45.862683  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 5/60
	I1004 01:51:46.864484  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 6/60
	I1004 01:51:47.865916  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 7/60
	I1004 01:51:48.867274  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 8/60
	I1004 01:51:49.869255  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 9/60
	I1004 01:51:50.871139  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 10/60
	I1004 01:51:51.873038  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 11/60
	I1004 01:51:52.874569  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 12/60
	I1004 01:51:53.876977  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 13/60
	I1004 01:51:54.878523  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 14/60
	I1004 01:51:55.880431  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 15/60
	I1004 01:51:56.882562  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 16/60
	I1004 01:51:57.884400  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 17/60
	I1004 01:51:58.885673  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 18/60
	I1004 01:51:59.887162  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 19/60
	I1004 01:52:00.889300  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 20/60
	I1004 01:52:01.891699  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 21/60
	I1004 01:52:02.893046  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 22/60
	I1004 01:52:03.894703  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 23/60
	I1004 01:52:04.896617  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 24/60
	I1004 01:52:05.898688  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 25/60
	I1004 01:52:06.900422  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 26/60
	I1004 01:52:07.901953  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 27/60
	I1004 01:52:08.903159  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 28/60
	I1004 01:52:09.905649  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 29/60
	I1004 01:52:10.907893  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 30/60
	I1004 01:52:11.909956  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 31/60
	I1004 01:52:12.911398  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 32/60
	I1004 01:52:13.912887  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 33/60
	I1004 01:52:14.914587  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 34/60
	I1004 01:52:15.917006  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 35/60
	I1004 01:52:16.918373  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 36/60
	I1004 01:52:17.920377  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 37/60
	I1004 01:52:18.921927  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 38/60
	I1004 01:52:19.923369  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 39/60
	I1004 01:52:20.925306  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 40/60
	I1004 01:52:21.926759  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 41/60
	I1004 01:52:22.928628  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 42/60
	I1004 01:52:23.930071  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 43/60
	I1004 01:52:24.931669  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 44/60
	I1004 01:52:25.933369  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 45/60
	I1004 01:52:26.934759  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 46/60
	I1004 01:52:27.936252  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 47/60
	I1004 01:52:28.937570  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 48/60
	I1004 01:52:29.938883  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 49/60
	I1004 01:52:30.940666  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 50/60
	I1004 01:52:31.941977  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 51/60
	I1004 01:52:32.944446  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 52/60
	I1004 01:52:33.945900  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 53/60
	I1004 01:52:34.947285  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 54/60
	I1004 01:52:35.948862  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 55/60
	I1004 01:52:36.950616  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 56/60
	I1004 01:52:37.952312  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 57/60
	I1004 01:52:38.954490  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 58/60
	I1004 01:52:39.955769  168880 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for machine to stop 59/60
	I1004 01:52:40.956753  168880 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1004 01:52:40.956813  168880 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1004 01:52:40.958683  168880 out.go:177] 
	W1004 01:52:40.960147  168880 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1004 01:52:40.960164  168880 out.go:239] * 
	* 
	W1004 01:52:40.962482  168880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1004 01:52:40.963792  168880 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-239802 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
E1004 01:52:58.426717  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802: exit status 3 (18.436628072s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1004 01:52:59.402190  169339 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host
	E1004 01:52:59.402212  169339 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-239802" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802: exit status 3 (3.167868541s)

-- stdout --
	Error

                                                
** stderr ** 
	E1004 01:53:02.570259  169414 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host
	E1004 01:53:02.570281  169414 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host

** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-239802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-239802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156325516s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-239802 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802: exit status 3 (3.063620155s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1004 01:53:11.790263  169485 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host
	E1004 01:53:11.790295  169485 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.105:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-239802" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.39s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.56s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 01:55:33.291104  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-509298 -n embed-certs-509298
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:04:30.80675224 +0000 UTC m=+4863.177783262
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-509298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-509298 logs -n 25: (1.368162118s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-107182        | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-528457                              | cert-expiration-528457       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-554732 | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | disable-driver-mounts-554732                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:53:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:53:11.828274  169515 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:53:11.828536  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828547  169515 out.go:309] Setting ErrFile to fd 2...
	I1004 01:53:11.828552  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828768  169515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:53:11.829347  169515 out.go:303] Setting JSON to false
	I1004 01:53:11.830376  169515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9343,"bootTime":1696375049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:53:11.830441  169515 start.go:138] virtualization: kvm guest
	I1004 01:53:11.832711  169515 out.go:177] * [default-k8s-diff-port-239802] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:53:11.834324  169515 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:53:11.835643  169515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:53:11.834361  169515 notify.go:220] Checking for updates...
	I1004 01:53:11.838217  169515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:53:11.839555  169515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:53:11.840846  169515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:53:11.842161  169515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:53:07.280681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:09.778282  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.779681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.843761  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:53:11.844277  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.844360  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.860250  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I1004 01:53:11.860700  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.861256  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.861279  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.861643  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.861866  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.862175  169515 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:53:11.862447  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.862487  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.877262  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1004 01:53:11.877711  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.878333  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.878357  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.878806  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.879014  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.917299  169515 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:53:11.918706  169515 start.go:298] selected driver: kvm2
	I1004 01:53:11.918721  169515 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.918831  169515 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:53:11.919435  169515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.919506  169515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:53:11.934986  169515 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:53:11.935329  169515 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:53:11.935365  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:53:11.935379  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:53:11.935399  169515 start_flags.go:321] config:
	{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-23980
2 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.935580  169515 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.937595  169515 out.go:177] * Starting control plane node default-k8s-diff-port-239802 in cluster default-k8s-diff-port-239802
	I1004 01:53:11.938856  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:53:11.938906  169515 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:53:11.938918  169515 cache.go:57] Caching tarball of preloaded images
	I1004 01:53:11.939005  169515 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:53:11.939019  169515 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:53:11.939123  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:53:11.939343  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:53:11.939424  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 58.221µs
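The profile lock above is acquired with Delay:500ms and Timeout:13m0s, i.e. a non-blocking attempt retried every half second until a deadline. A rough Go sketch of that acquire-with-retry pattern, assuming a simple flock-based lock file (minikube's real lock implementation differs; names and the path are illustrative):

	package main

	import (
		"fmt"
		"os"
		"syscall"
		"time"
	)

	// acquire takes an exclusive, non-blocking flock on path, retrying every
	// `delay` until `timeout`, similar in spirit to the machines lock above.
	func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o644)
		if err != nil {
			return nil, err
		}
		deadline := time.Now().Add(timeout)
		for {
			if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
				return f, nil
			} else if time.Now().After(deadline) {
				f.Close()
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		fmt.Println("lock acquired")
	}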
	I1004 01:53:11.939444  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:53:11.939453  169515 fix.go:54] fixHost starting: 
	I1004 01:53:11.939742  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.939789  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.954196  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I1004 01:53:11.954631  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.955177  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.955207  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.955546  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.955732  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.955907  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:53:11.957727  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Running err=<nil>
	W1004 01:53:11.957752  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:53:11.959786  169515 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-239802" VM ...
	I1004 01:53:08.669530  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.168697  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:10.723754  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.223290  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.960962  169515 machine.go:88] provisioning docker machine ...
	I1004 01:53:11.960980  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.961165  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961309  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:53:11.961321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961451  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:53:11.964100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964548  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:49:35 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:53:11.964579  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964700  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:53:11.964891  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965213  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:53:11.965415  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:53:11.965918  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:53:11.965942  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:53:14.858205  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
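The `Error dialing TCP ... connect: no route to host` lines that repeat from here on are the provisioner retrying the guest's SSH endpoint at 192.168.61.105:22 while the VM is unreachable. A minimal, self-contained Go sketch of that reachability probe (illustrative only, not minikube's SSH code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH makes a plain TCP dial to the guest's SSH port with a timeout.
	// An unreachable VM yields errors like the ones logged above,
	// e.g. "connect: no route to host".
	func probeSSH(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// 192.168.61.105 is the libvirt DHCP lease shown earlier in this log.
		if err := probeSSH("192.168.61.105:22", 5*time.Second); err != nil {
			fmt.Println("ssh port not reachable yet:", err)
		}
	}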
	I1004 01:53:13.780979  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:16.279971  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.170120  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.170376  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.724119  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:18.223219  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.930132  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:18.779188  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.668906  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:19.669782  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:22.169918  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.724642  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:23.225475  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.010157  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:23.279668  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.778425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.668233  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:26.669315  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.723231  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:28.222973  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:27.082190  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:27.778573  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.779483  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.168734  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:31.169219  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:30.223870  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:32.724030  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.162101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:36.234078  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:32.278768  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:34.279611  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:36.779455  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.669109  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.669923  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.224564  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.723997  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:39.724578  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:38.779567  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:41.278736  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.671432  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:40.168863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.168970  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.223844  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.224215  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.358317  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:43.278799  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.279544  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.169371  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.670033  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.726544  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.222631  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.426196  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:47.282389  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.779291  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.673161  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.170963  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.223796  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.724046  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.506087  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:52.280232  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.778941  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.668512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:55.668997  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:56.223812  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.223985  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:57.578187  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:57.281468  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:59.780369  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.169361  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.171086  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.723767  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.724182  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:03.658082  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:06.730171  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:02.278547  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:04.279504  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:06.779458  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.669174  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.169089  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.224336  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.724614  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:08.780155  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:11.281399  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:09.670536  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.170645  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:10.223678  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.724096  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.810084  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:15.882179  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:13.780199  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.280077  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:14.668216  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.668736  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:15.223755  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:17.223789  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:19.724040  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.780554  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.283185  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.672583  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.169626  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:22.223220  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:24.223653  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.962094  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:25.034104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:23.779529  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:25.785001  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:23.668523  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.170080  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.725426  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:29.224292  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.114102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:28.278824  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.280812  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:28.668973  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.669813  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.724077  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.223673  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.186185  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:32.283313  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.785440  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:33.169511  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:35.170079  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:36.223744  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:38.223824  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.270113  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:37.279625  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:39.779646  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:37.670022  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.170303  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.723833  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.723858  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.723974  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:43.338083  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:42.281698  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.778204  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.779425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.668686  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.671405  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:47.170837  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.418200  167452 pod_ready.go:81] duration metric: took 4m0.000746433s waiting for pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace to be "Ready" ...
	E1004 01:54:46.418242  167452 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:54:46.418266  167452 pod_ready.go:38] duration metric: took 4m6.792871015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:54:46.418310  167452 kubeadm.go:640] restartCluster took 4m30.137827083s
	W1004 01:54:46.418446  167452 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:54:46.418484  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
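The `context deadline exceeded` above is the 4m0s readiness wait expiring for metrics-server, after which minikube gives up on restarting the existing cluster and falls back to `kubeadm reset` followed by a fresh `kubeadm init`. A minimal sketch of the wait-with-deadline pattern that produces this error (helper names are illustrative, not minikube's API):

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// waitPodReady polls a readiness check until it reports true or the
	// context's deadline expires, which surfaces as "context deadline exceeded".
	func waitPodReady(ctx context.Context, check func() (bool, error)) error {
		ticker := time.NewTicker(2 * time.Second)
		defer ticker.Stop()
		for {
			ready, err := check()
			if err != nil {
				return err
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		// minikube waits 4m0s; a short deadline keeps this demo quick.
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		neverReady := func() (bool, error) { return false, nil } // stands in for a pod that never turns Ready
		if err := waitPodReady(ctx, neverReady); errors.Is(err, context.DeadlineExceeded) {
			fmt.Println("timed out waiting for pod to be Ready:", err)
		}
	}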
	I1004 01:54:49.418125  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:48.780239  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.284905  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:49.174919  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.675479  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:52.490104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:53.778907  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:55.778958  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:54.169521  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:56.670982  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:58.570115  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:01.642220  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:57.779481  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.782476  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.170012  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:01.670386  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:00.372786  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.954218871s)
	I1004 01:55:00.372881  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:00.387256  167452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:55:00.396756  167452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:55:00.406765  167452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:55:00.406806  167452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 01:55:00.625971  167452 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:55:02.279852  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.281525  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.779641  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.170863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.671473  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:07.722109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:10.794061  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:08.780879  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.283040  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.183572  167452 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 01:55:12.183661  167452 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:55:12.183766  167452 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:55:12.183877  167452 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:55:12.183978  167452 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:55:12.184074  167452 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:55:12.185782  167452 out.go:204]   - Generating certificates and keys ...
	I1004 01:55:12.185896  167452 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:55:12.185952  167452 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:55:12.186040  167452 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:55:12.186118  167452 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:55:12.186210  167452 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:55:12.186309  167452 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:55:12.186400  167452 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:55:12.186483  167452 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:55:12.186608  167452 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:55:12.186728  167452 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:55:12.186790  167452 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:55:12.186869  167452 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:55:12.186944  167452 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:55:12.187022  167452 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:55:12.187094  167452 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:55:12.187174  167452 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:55:12.187302  167452 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:55:12.187369  167452 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:55:12.188941  167452 out.go:204]   - Booting up control plane ...
	I1004 01:55:12.189059  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:55:12.189132  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:55:12.189211  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:55:12.189324  167452 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:55:12.189452  167452 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:55:12.189504  167452 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 01:55:12.189735  167452 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:55:12.189877  167452 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004191 seconds
	I1004 01:55:12.190030  167452 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:55:12.190218  167452 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:55:12.190314  167452 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:55:12.190566  167452 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-509298 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:55:12.190670  167452 kubeadm.go:322] [bootstrap-token] Using token: i6ebw8.csx7j4uz10ltteg7
	I1004 01:55:12.192239  167452 out.go:204]   - Configuring RBAC rules ...
	I1004 01:55:12.192387  167452 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:55:12.192462  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:55:12.192608  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:55:12.192774  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:55:12.192904  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:55:12.192996  167452 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:55:12.193138  167452 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:55:12.193211  167452 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:55:12.193271  167452 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:55:12.193278  167452 kubeadm.go:322] 
	I1004 01:55:12.193325  167452 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:55:12.193332  167452 kubeadm.go:322] 
	I1004 01:55:12.193398  167452 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:55:12.193404  167452 kubeadm.go:322] 
	I1004 01:55:12.193424  167452 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:55:12.193475  167452 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:55:12.193517  167452 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:55:12.193523  167452 kubeadm.go:322] 
	I1004 01:55:12.193565  167452 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 01:55:12.193571  167452 kubeadm.go:322] 
	I1004 01:55:12.193628  167452 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:55:12.193638  167452 kubeadm.go:322] 
	I1004 01:55:12.193704  167452 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:55:12.193783  167452 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:55:12.193895  167452 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:55:12.193906  167452 kubeadm.go:322] 
	I1004 01:55:12.194003  167452 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:55:12.194073  167452 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:55:12.194080  167452 kubeadm.go:322] 
	I1004 01:55:12.194169  167452 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194254  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:55:12.194273  167452 kubeadm.go:322] 	--control-plane 
	I1004 01:55:12.194279  167452 kubeadm.go:322] 
	I1004 01:55:12.194352  167452 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:55:12.194360  167452 kubeadm.go:322] 
	I1004 01:55:12.194428  167452 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194540  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
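The `--discovery-token-ca-cert-hash sha256:...` printed in the join command is a pin over the cluster CA's Subject Public Key Info (RFC 7469 style). A short Go sketch that recomputes it from the CA certificate; the path is the certificateDir logged above and is assumed, not taken from this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumed location, matching the certificateDir "/var/lib/minikube/certs".
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm's discovery hash is SHA-256 over the CA's Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}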
	I1004 01:55:12.194563  167452 cni.go:84] Creating CNI manager for ""
	I1004 01:55:12.194572  167452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:55:12.196296  167452 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:55:09.172018  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.670011  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.197574  167452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:55:12.219217  167452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
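The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI chain that minikube recommends for the kvm2 driver with the crio runtime. A representative conflist of that shape, written the same way (values are illustrative examples, not the exact file from this run):

	package main

	import (
		"log"
		"os"
	)

	// bridgeConflist is a typical CNI "bridge" plugin chain of the kind placed
	// in /etc/cni/net.d; names and the subnet here are examples only.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16",
	        "routes": [{ "dst": "0.0.0.0/0" }]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}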
	I1004 01:55:12.298578  167452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:55:12.298671  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.298685  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=embed-certs-509298 minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.379573  167452 ops.go:34] apiserver oom_adj: -16
	I1004 01:55:12.664606  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.821682  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.427770  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.928385  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.428534  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.927827  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.780253  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.286195  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:14.169232  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.669256  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:15.428102  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:15.928404  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.428316  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.928095  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.428581  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.928158  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.428061  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.927815  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.428285  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.927597  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.874102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:19.946137  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:18.779212  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.780120  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:18.671773  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:21.169373  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.428231  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:20.927662  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.427644  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.927803  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.427969  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.928321  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.428088  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.928382  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.427968  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.686625  167452 kubeadm.go:1081] duration metric: took 12.388021854s to wait for elevateKubeSystemPrivileges.
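The burst of `kubectl get sa default` calls above, spaced roughly 500ms apart, is minikube waiting for the `default` service account to appear as part of elevateKubeSystemPrivileges. A minimal sketch of that retry loop (the command line is copied from the log; the loop itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs `kubectl get sa default` every 500ms until the
	// command succeeds or the deadline passes, mirroring the retries logged above.
	func waitForDefaultSA(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.2/kubectl",
				"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA(2 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}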
	I1004 01:55:24.686650  167452 kubeadm.go:406] StartCluster complete in 5m8.467148399s
	I1004 01:55:24.686670  167452 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.686772  167452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:55:24.689005  167452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.691164  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:55:24.691505  167452 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:55:24.691524  167452 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:55:24.691609  167452 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-509298"
	I1004 01:55:24.691645  167452 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-509298"
	W1004 01:55:24.691666  167452 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:55:24.691681  167452 addons.go:69] Setting default-storageclass=true in profile "embed-certs-509298"
	I1004 01:55:24.691711  167452 addons.go:69] Setting metrics-server=true in profile "embed-certs-509298"
	I1004 01:55:24.691721  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.691750  167452 addons.go:231] Setting addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:24.691713  167452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-509298"
	W1004 01:55:24.691763  167452 addons.go:240] addon metrics-server should already be in state true
	I1004 01:55:24.692075  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692471  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692522  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692566  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692591  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.710712  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1004 01:55:24.711360  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.711863  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1004 01:55:24.712115  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712145  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.712236  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.712668  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.712925  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712950  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.713327  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.713364  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713391  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.713880  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713918  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.715208  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I1004 01:55:24.715594  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.716155  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.716185  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.716523  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.716732  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.720408  167452 addons.go:231] Setting addon default-storageclass=true in "embed-certs-509298"
	W1004 01:55:24.720590  167452 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:55:24.720630  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.720922  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.720963  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.731384  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1004 01:55:24.732142  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.732918  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.732946  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.733348  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.733666  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I1004 01:55:24.733699  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.734163  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.734711  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.734737  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.735163  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.735400  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.735991  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.738353  167452 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:55:24.740203  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:55:24.740222  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:55:24.737643  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.740244  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.742072  167452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:55:24.743597  167452 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:24.743626  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:55:24.743648  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.744536  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745006  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.745048  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745279  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.745519  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.745719  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.745878  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.748789  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.748842  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1004 01:55:24.749267  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.749298  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.749354  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.749818  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.749892  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.749978  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.750177  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.750270  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.750325  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.750752  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.750802  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.751018  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.768787  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I1004 01:55:24.769394  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.770412  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.770438  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.770803  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.770982  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.772831  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.773101  167452 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.773120  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:55:24.773138  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.776980  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777337  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.777390  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777623  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.777827  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.778030  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.778218  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.827144  167452 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-509298" context rescaled to 1 replicas
	I1004 01:55:24.827188  167452 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.170 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:55:24.829039  167452 out.go:177] * Verifying Kubernetes components...
	I1004 01:55:24.830422  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:24.912112  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:55:24.912145  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:55:24.941943  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.953635  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:55:24.953669  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:55:24.964038  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:25.010973  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.011004  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:55:25.069236  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.073447  167452 node_ready.go:35] waiting up to 6m0s for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.073533  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:55:26.026178  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:23.280683  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.280934  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.276517  167452 node_ready.go:49] node "embed-certs-509298" has status "Ready":"True"
	I1004 01:55:25.276548  167452 node_ready.go:38] duration metric: took 203.068295ms waiting for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.276561  167452 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:25.459727  167452 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:26.648518  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706528042s)
	I1004 01:55:26.648633  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.648655  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.648984  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649002  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.649012  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.649021  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.649326  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:26.649367  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649378  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.670495  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.670520  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.670831  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.670890  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318331  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.35425456s)
	I1004 01:55:27.318392  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318407  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318442  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.249161738s)
	I1004 01:55:27.318496  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318502  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244935012s)
	I1004 01:55:27.318516  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318526  167452 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 01:55:27.318839  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318886  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318904  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318915  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318934  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318944  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318946  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318966  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318980  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318993  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.319203  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319225  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319232  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319242  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.319257  167452 addons.go:467] Verifying addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:27.319290  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319300  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.321408  167452 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1004 01:55:23.171045  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.171137  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.323360  167452 addons.go:502] enable addons completed in 2.631835233s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1004 01:55:27.504611  167452 pod_ready.go:102] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:28.987732  167452 pod_ready.go:92] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.987757  167452 pod_ready.go:81] duration metric: took 3.527990687s waiting for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.987769  167452 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993933  167452 pod_ready.go:92] pod "etcd-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.993953  167452 pod_ready.go:81] duration metric: took 6.17579ms waiting for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993966  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000725  167452 pod_ready.go:92] pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.000747  167452 pod_ready.go:81] duration metric: took 6.77205ms waiting for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000759  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005757  167452 pod_ready.go:92] pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.005779  167452 pod_ready.go:81] duration metric: took 5.011182ms waiting for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005790  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010519  167452 pod_ready.go:92] pod "kube-proxy-f99th" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.010537  167452 pod_ready.go:81] duration metric: took 4.738537ms waiting for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010548  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383772  167452 pod_ready.go:92] pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.383795  167452 pod_ready.go:81] duration metric: took 373.240101ms waiting for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383803  167452 pod_ready.go:38] duration metric: took 4.107228637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:29.383834  167452 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:29.383882  167452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:29.399227  167452 api_server.go:72] duration metric: took 4.572006648s to wait for apiserver process to appear ...
	I1004 01:55:29.399259  167452 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:29.399279  167452 api_server.go:253] Checking apiserver healthz at https://192.168.50.170:8443/healthz ...
	I1004 01:55:29.405336  167452 api_server.go:279] https://192.168.50.170:8443/healthz returned 200:
	ok
	I1004 01:55:29.406768  167452 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:29.406794  167452 api_server.go:131] duration metric: took 7.526875ms to wait for apiserver health ...
	I1004 01:55:29.406804  167452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:29.586194  167452 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:29.586225  167452 system_pods.go:61] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.586230  167452 system_pods.go:61] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.586236  167452 system_pods.go:61] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.586241  167452 system_pods.go:61] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.586248  167452 system_pods.go:61] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.586253  167452 system_pods.go:61] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.586261  167452 system_pods.go:61] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.586269  167452 system_pods.go:61] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.586276  167452 system_pods.go:74] duration metric: took 179.466307ms to wait for pod list to return data ...
	I1004 01:55:29.586289  167452 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:29.782372  167452 default_sa.go:45] found service account: "default"
	I1004 01:55:29.782395  167452 default_sa.go:55] duration metric: took 196.098004ms for default service account to be created ...
	I1004 01:55:29.782403  167452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:29.988230  167452 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:29.988261  167452 system_pods.go:89] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.988267  167452 system_pods.go:89] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.988271  167452 system_pods.go:89] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.988276  167452 system_pods.go:89] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.988281  167452 system_pods.go:89] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.988285  167452 system_pods.go:89] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.988298  167452 system_pods.go:89] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.988305  167452 system_pods.go:89] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.988313  167452 system_pods.go:126] duration metric: took 205.9045ms to wait for k8s-apps to be running ...
	I1004 01:55:29.988323  167452 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:29.988369  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:30.003487  167452 system_svc.go:56] duration metric: took 15.153598ms WaitForService to wait for kubelet.
	I1004 01:55:30.003513  167452 kubeadm.go:581] duration metric: took 5.176299768s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:30.003534  167452 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:30.184152  167452 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:30.184177  167452 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:30.184186  167452 node_conditions.go:105] duration metric: took 180.648418ms to run NodePressure ...
	I1004 01:55:30.184198  167452 start.go:228] waiting for startup goroutines ...
	I1004 01:55:30.184204  167452 start.go:233] waiting for cluster config update ...
	I1004 01:55:30.184213  167452 start.go:242] writing updated cluster config ...
	I1004 01:55:30.184486  167452 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:30.233803  167452 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:30.235636  167452 out.go:177] * Done! kubectl is now configured to use "embed-certs-509298" cluster and "default" namespace by default
	I1004 01:55:29.098156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:27.779362  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.779502  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:31.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.670021  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.678512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:32.172222  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:35.178103  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:34.279433  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:36.781532  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:34.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:37.170113  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:38.254127  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:39.278584  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.279085  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:39.668721  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.670095  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:44.330119  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:43.780710  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:45.782354  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.472905  166755 pod_ready.go:81] duration metric: took 4m0.000518679s waiting for pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:46.472936  166755 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:46.472946  166755 pod_ready.go:38] duration metric: took 4m5.201194434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:46.472975  166755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:46.473020  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:46.473075  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:46.533201  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:46.533233  166755 cri.go:89] found id: ""
	I1004 01:55:46.533243  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:46.533304  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.538613  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:46.538673  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:46.580801  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:46.580826  166755 cri.go:89] found id: ""
	I1004 01:55:46.580834  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:46.580896  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.586423  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:46.586510  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:46.645487  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:46.645526  166755 cri.go:89] found id: ""
	I1004 01:55:46.645535  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:46.645618  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.650643  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:46.650719  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:46.693457  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:46.693482  166755 cri.go:89] found id: ""
	I1004 01:55:46.693492  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:46.693553  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.698463  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:46.698538  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:46.744251  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:46.744279  166755 cri.go:89] found id: ""
	I1004 01:55:46.744289  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:46.744353  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.749343  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:46.749419  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:46.792717  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:46.792745  166755 cri.go:89] found id: ""
	I1004 01:55:46.792755  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:46.792820  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.797417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:46.797492  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:46.843004  166755 cri.go:89] found id: ""
	I1004 01:55:46.843033  166755 logs.go:284] 0 containers: []
	W1004 01:55:46.843044  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:46.843051  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:46.843114  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:44.169475  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.171848  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:47.402086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:46.883372  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:46.883397  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.883405  166755 cri.go:89] found id: ""
	I1004 01:55:46.883415  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:46.883476  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.888350  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.892981  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:46.893010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.936801  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:46.936829  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:46.983092  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:46.983124  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:46.997604  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:46.997634  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:47.041461  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:47.041500  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:47.098192  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:47.098234  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:47.139982  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:47.140010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:47.184753  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:47.184789  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:47.242417  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:47.242456  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:47.290664  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:47.290696  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:47.332998  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:47.333035  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:47.779448  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:47.779490  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:47.951031  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:47.951067  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.505155  166755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:50.522774  166755 api_server.go:72] duration metric: took 4m16.635946913s to wait for apiserver process to appear ...
	I1004 01:55:50.522804  166755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:50.522848  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:50.522929  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:50.565196  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.565220  166755 cri.go:89] found id: ""
	I1004 01:55:50.565232  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:50.565288  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.569426  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:50.569488  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:50.608113  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:50.608138  166755 cri.go:89] found id: ""
	I1004 01:55:50.608147  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:50.608194  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.612671  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:50.612730  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:50.659777  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.659806  166755 cri.go:89] found id: ""
	I1004 01:55:50.659817  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:50.659888  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.664188  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:50.664260  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:50.709318  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:50.709346  166755 cri.go:89] found id: ""
	I1004 01:55:50.709358  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:50.709422  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.713604  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:50.713674  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:50.757565  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:50.757597  166755 cri.go:89] found id: ""
	I1004 01:55:50.757607  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:50.757666  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.761646  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:50.761711  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:50.802683  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:50.802712  166755 cri.go:89] found id: ""
	I1004 01:55:50.802722  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:50.802785  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.807369  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:50.807443  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:50.849917  166755 cri.go:89] found id: ""
	I1004 01:55:50.849952  166755 logs.go:284] 0 containers: []
	W1004 01:55:50.849965  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:50.849974  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:50.850042  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:50.889329  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:50.889353  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:50.889357  166755 cri.go:89] found id: ""
	I1004 01:55:50.889365  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:50.889489  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.894295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.898319  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:50.898345  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:50.950303  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:50.950339  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.989731  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:50.989767  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:51.036483  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:51.036526  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:51.094053  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:51.094109  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:51.234887  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:51.234922  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:51.283233  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:51.283276  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:51.340569  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:51.340610  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:51.751585  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:51.751629  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:51.765404  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:51.765446  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:51.813579  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:51.813611  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:51.853408  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:51.853458  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:48.670114  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:51.169274  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:53.482075  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:56.554101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:51.899649  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:51.899686  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.447493  166755 api_server.go:253] Checking apiserver healthz at https://192.168.83.165:8443/healthz ...
	I1004 01:55:54.453104  166755 api_server.go:279] https://192.168.83.165:8443/healthz returned 200:
	ok
	I1004 01:55:54.455299  166755 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:54.455327  166755 api_server.go:131] duration metric: took 3.932514868s to wait for apiserver health ...
	I1004 01:55:54.455338  166755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:54.455368  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:54.455431  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:54.501159  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:54.501180  166755 cri.go:89] found id: ""
	I1004 01:55:54.501188  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:54.501250  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.506342  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:54.506418  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:54.548780  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:54.548801  166755 cri.go:89] found id: ""
	I1004 01:55:54.548808  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:54.548863  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.560318  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:54.560397  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:54.606477  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:54.606509  166755 cri.go:89] found id: ""
	I1004 01:55:54.606521  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:54.606581  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.611004  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:54.611069  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:54.657003  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.657031  166755 cri.go:89] found id: ""
	I1004 01:55:54.657041  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:54.657106  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.661386  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:54.661459  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:54.713209  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:54.713237  166755 cri.go:89] found id: ""
	I1004 01:55:54.713246  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:54.713295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.718417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:54.718489  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:54.767945  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:54.767969  166755 cri.go:89] found id: ""
	I1004 01:55:54.767979  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:54.768040  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.772488  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:54.772576  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:54.823905  166755 cri.go:89] found id: ""
	I1004 01:55:54.823935  166755 logs.go:284] 0 containers: []
	W1004 01:55:54.823945  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:54.823954  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:54.824017  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:54.878037  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:54.878069  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:54.878076  166755 cri.go:89] found id: ""
	I1004 01:55:54.878086  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:54.878146  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.883456  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.887685  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:54.887708  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:55.021714  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:55.021761  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:55.066557  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:55.066595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:55.125278  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:55.125336  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:55.170570  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:55.170607  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:55.212833  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:55.212866  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:55.552035  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:55.552080  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:55.601698  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:55.601738  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:55.662745  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:55.662786  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:55.707632  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:55.707665  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:55.746461  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:55.746489  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:55.809111  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:55.809150  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:55.850557  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:55.850595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:53.670067  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:55.670340  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:58.374828  166755 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:58.374864  166755 system_pods.go:61] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.374871  166755 system_pods.go:61] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.374878  166755 system_pods.go:61] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.374885  166755 system_pods.go:61] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.374891  166755 system_pods.go:61] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.374898  166755 system_pods.go:61] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.374909  166755 system_pods.go:61] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.374919  166755 system_pods.go:61] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.374934  166755 system_pods.go:74] duration metric: took 3.919586902s to wait for pod list to return data ...
	I1004 01:55:58.374943  166755 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:58.379203  166755 default_sa.go:45] found service account: "default"
	I1004 01:55:58.379228  166755 default_sa.go:55] duration metric: took 4.271125ms for default service account to be created ...
	I1004 01:55:58.379237  166755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:58.389346  166755 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:58.389369  166755 system_pods.go:89] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.389375  166755 system_pods.go:89] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.389379  166755 system_pods.go:89] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.389384  166755 system_pods.go:89] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.389388  166755 system_pods.go:89] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.389391  166755 system_pods.go:89] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.389399  166755 system_pods.go:89] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.389404  166755 system_pods.go:89] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.389411  166755 system_pods.go:126] duration metric: took 10.168718ms to wait for k8s-apps to be running ...
	I1004 01:55:58.389422  166755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:58.389467  166755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:58.410785  166755 system_svc.go:56] duration metric: took 21.353423ms WaitForService to wait for kubelet.
	I1004 01:55:58.410814  166755 kubeadm.go:581] duration metric: took 4m24.523994722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:58.410840  166755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:58.414873  166755 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:58.414899  166755 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:58.414913  166755 node_conditions.go:105] duration metric: took 4.067596ms to run NodePressure ...
	I1004 01:55:58.414927  166755 start.go:228] waiting for startup goroutines ...
	I1004 01:55:58.414936  166755 start.go:233] waiting for cluster config update ...
	I1004 01:55:58.414948  166755 start.go:242] writing updated cluster config ...
	I1004 01:55:58.415228  166755 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:58.469095  166755 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:58.470860  166755 out.go:177] * Done! kubectl is now configured to use "no-preload-273516" cluster and "default" namespace by default
	I1004 01:55:57.863028  167496 pod_ready.go:81] duration metric: took 4m0.000377885s waiting for pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:57.863064  167496 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:57.863085  167496 pod_ready.go:38] duration metric: took 4m1.198718353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:57.863115  167496 kubeadm.go:640] restartCluster took 5m18.524534819s
	W1004 01:55:57.863173  167496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:55:57.863207  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:56:02.773154  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.909900495s)
	I1004 01:56:02.773229  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:02.786455  167496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:56:02.796780  167496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:56:02.806618  167496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:56:02.806677  167496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1004 01:56:02.872853  167496 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1004 01:56:02.872972  167496 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:56:03.024967  167496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:56:03.025128  167496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:56:03.025294  167496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:56:03.249926  167496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:56:03.251503  167496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:56:03.259788  167496 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1004 01:56:03.380740  167496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:56:03.382796  167496 out.go:204]   - Generating certificates and keys ...
	I1004 01:56:03.382964  167496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:56:03.383087  167496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:56:03.383195  167496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:56:03.383291  167496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:56:03.383404  167496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:56:03.383494  167496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:56:03.383899  167496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:56:03.384184  167496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:56:03.384678  167496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:56:03.385233  167496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:56:03.385302  167496 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:56:03.385358  167496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:56:03.892124  167496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:56:04.106548  167496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:56:04.323375  167496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:56:04.510112  167496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:56:04.512389  167496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:56:02.634095  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:05.710104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:04.514200  167496 out.go:204]   - Booting up control plane ...
	I1004 01:56:04.514318  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:56:04.523675  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:56:04.534185  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:56:04.535396  167496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:56:04.551484  167496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:56:11.786134  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:14.564099  167496 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.011014 seconds
	I1004 01:56:14.564257  167496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:56:14.578656  167496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:56:15.106513  167496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:56:15.106688  167496 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-107182 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1004 01:56:15.616926  167496 kubeadm.go:322] [bootstrap-token] Using token: ocks1c.c9c0w76e1jxk27wy
	I1004 01:56:15.619692  167496 out.go:204]   - Configuring RBAC rules ...
	I1004 01:56:15.619849  167496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:56:15.627037  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:56:15.631821  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:56:15.635639  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:56:15.641343  167496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:56:15.709440  167496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:56:16.046524  167496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:56:16.046544  167496 kubeadm.go:322] 
	I1004 01:56:16.046605  167496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:56:16.046616  167496 kubeadm.go:322] 
	I1004 01:56:16.046691  167496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:56:16.046698  167496 kubeadm.go:322] 
	I1004 01:56:16.046727  167496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:56:16.046781  167496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:56:16.046877  167496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:56:16.046902  167496 kubeadm.go:322] 
	I1004 01:56:16.046980  167496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:56:16.047101  167496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:56:16.047198  167496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:56:16.047210  167496 kubeadm.go:322] 
	I1004 01:56:16.047316  167496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1004 01:56:16.047429  167496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:56:16.047448  167496 kubeadm.go:322] 
	I1004 01:56:16.047560  167496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.047736  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:56:16.047783  167496 kubeadm.go:322]     --control-plane 	  
	I1004 01:56:16.047790  167496 kubeadm.go:322] 
	I1004 01:56:16.047912  167496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:56:16.047926  167496 kubeadm.go:322] 
	I1004 01:56:16.048006  167496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.048141  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:56:16.048764  167496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:56:16.048792  167496 cni.go:84] Creating CNI manager for ""
	I1004 01:56:16.048803  167496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:56:16.051468  167496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:56:14.858093  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:16.052923  167496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:56:16.062452  167496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:56:16.083093  167496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:56:16.083231  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.083232  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=old-k8s-version-107182 minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.097641  167496 ops.go:34] apiserver oom_adj: -16
	I1004 01:56:16.345591  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.432507  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:17.021142  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.938186  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:17.521246  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.020458  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.521120  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.020993  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.521313  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.020752  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.520524  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.020817  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.521038  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:22.020893  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.014159  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:22.520834  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.021375  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.521450  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.021541  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.521194  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.021420  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.521388  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.020861  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.520474  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:27.020520  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.094110  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:27.520733  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.020857  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.520471  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.020869  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.520801  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.020670  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.521376  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.021462  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.521133  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.021118  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.139808  167496 kubeadm.go:1081] duration metric: took 16.056644408s to wait for elevateKubeSystemPrivileges.
	I1004 01:56:32.139853  167496 kubeadm.go:406] StartCluster complete in 5m52.878327636s
	I1004 01:56:32.139879  167496 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.139983  167496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:56:32.143255  167496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.143507  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:56:32.143608  167496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:56:32.143692  167496 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143710  167496 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-107182"
	I1004 01:56:32.143708  167496 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1004 01:56:32.143717  167496 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:56:32.143732  167496 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143751  167496 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-107182"
	W1004 01:56:32.143762  167496 addons.go:240] addon metrics-server should already be in state true
	I1004 01:56:32.143777  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143807  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143717  167496 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143830  167496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-107182"
	I1004 01:56:32.144169  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144206  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144216  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144236  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144237  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144317  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.161736  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1004 01:56:32.161739  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I1004 01:56:32.162384  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162494  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162735  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1004 01:56:32.163007  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163024  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163156  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163168  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163232  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.163731  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163747  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163809  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.163851  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164091  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164163  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.164565  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.164611  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.165506  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.165553  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.168699  167496 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-107182"
	W1004 01:56:32.168721  167496 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:56:32.168751  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.169121  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.169148  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.187125  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I1004 01:56:32.187814  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188164  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1004 01:56:32.188441  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.188462  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.188705  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188823  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1004 01:56:32.188990  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.189161  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.189340  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189357  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189428  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.189669  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189688  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189750  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190009  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.190037  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190736  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.190776  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.191392  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.193250  167496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:56:32.192019  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.194795  167496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.194811  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:56:32.194833  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196365  167496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:56:32.197757  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:56:32.197778  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:56:32.197798  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196532  167496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-107182" context rescaled to 1 replicas
	I1004 01:56:32.197859  167496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:56:32.199796  167496 out.go:177] * Verifying Kubernetes components...
	I1004 01:56:32.201368  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:32.202167  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202462  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202766  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.202794  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203229  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203304  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.203321  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203485  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203677  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.203744  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.204034  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204104  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204194  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.204755  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.211128  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I1004 01:56:32.211596  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.212134  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.212157  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.212528  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.212740  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.214335  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.214592  167496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.214608  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:56:32.214627  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.217280  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.217751  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.217781  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.218036  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.218202  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.218378  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.218528  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.390605  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.392051  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.434602  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:56:32.434629  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:56:32.469744  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:56:32.469793  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:56:32.488555  167496 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.489370  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:56:32.500794  167496 node_ready.go:49] node "old-k8s-version-107182" has status "Ready":"True"
	I1004 01:56:32.500818  167496 node_ready.go:38] duration metric: took 12.232731ms waiting for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.500828  167496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:32.514535  167496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:32.515832  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:32.515859  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:56:32.582811  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:33.449546  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05890047s)
	I1004 01:56:33.449619  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.449635  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450076  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450100  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450113  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.450115  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.450139  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450431  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450454  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450503  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.468938  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.468964  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.469311  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.469332  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.700534  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308435267s)
	I1004 01:56:33.700563  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.211163368s)
	I1004 01:56:33.700582  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.700596  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.700593  167496 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1004 01:56:33.700975  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.700998  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.701010  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.701012  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701021  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.701273  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701321  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.701330  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823328  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240468144s)
	I1004 01:56:33.823384  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823398  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.823769  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.823805  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823819  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823832  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.824142  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.824164  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.824176  167496 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-107182"
	I1004 01:56:33.825973  167496 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 01:56:33.162156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:33.827977  167496 addons.go:502] enable addons completed in 1.684381662s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 01:56:34.532496  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:37.031254  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:39.242136  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:39.031853  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:41.531371  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:42.314165  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:44.032920  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:44.533712  167496 pod_ready.go:92] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.533740  167496 pod_ready.go:81] duration metric: took 12.019178851s waiting for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.533753  167496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539300  167496 pod_ready.go:92] pod "kube-proxy-8lcf5" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.539327  167496 pod_ready.go:81] duration metric: took 5.564927ms waiting for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539337  167496 pod_ready.go:38] duration metric: took 12.038496722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:44.539360  167496 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:56:44.539419  167496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:56:44.554851  167496 api_server.go:72] duration metric: took 12.356945821s to wait for apiserver process to appear ...
	I1004 01:56:44.554881  167496 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:56:44.554900  167496 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I1004 01:56:44.562352  167496 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I1004 01:56:44.563304  167496 api_server.go:141] control plane version: v1.16.0
	I1004 01:56:44.563333  167496 api_server.go:131] duration metric: took 8.444498ms to wait for apiserver health ...
	I1004 01:56:44.563344  167496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:56:44.567672  167496 system_pods.go:59] 4 kube-system pods found
	I1004 01:56:44.567701  167496 system_pods.go:61] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.567708  167496 system_pods.go:61] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.567719  167496 system_pods.go:61] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.567728  167496 system_pods.go:61] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.567736  167496 system_pods.go:74] duration metric: took 4.384195ms to wait for pod list to return data ...
	I1004 01:56:44.567746  167496 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:56:44.570566  167496 default_sa.go:45] found service account: "default"
	I1004 01:56:44.570597  167496 default_sa.go:55] duration metric: took 2.843182ms for default service account to be created ...
	I1004 01:56:44.570608  167496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:56:44.575497  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.575524  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.575534  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.575543  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.575552  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.575572  167496 retry.go:31] will retry after 201.187376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:44.781105  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.781140  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.781146  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.781155  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.781162  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.781179  167496 retry.go:31] will retry after 304.433498ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.090030  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.090055  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.090061  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.090067  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.090073  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.090088  167496 retry.go:31] will retry after 344.077296ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.439684  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.439712  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.439717  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.439723  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.439729  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.439743  167496 retry.go:31] will retry after 379.883887ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.824813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.824839  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.824844  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.824853  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.824859  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.824873  167496 retry.go:31] will retry after 650.141708ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:46.480447  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:46.480473  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:46.480478  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:46.480486  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:46.480492  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:46.480507  167496 retry.go:31] will retry after 870.616376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:47.356424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:47.356452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:47.356457  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:47.356464  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:47.356470  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:47.356486  167496 retry.go:31] will retry after 972.499927ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:48.394163  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:51.466067  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:48.333234  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:48.333263  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:48.333269  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:48.333276  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:48.333282  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:48.333296  167496 retry.go:31] will retry after 1.071674914s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:49.410813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:49.410843  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:49.410853  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:49.410864  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:49.410873  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:49.410892  167496 retry.go:31] will retry after 1.833649065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:51.251023  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:51.251046  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:51.251052  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:51.251058  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:51.251065  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:51.251080  167496 retry.go:31] will retry after 1.914402614s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:53.170633  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:53.170675  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:53.170684  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:53.170697  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:53.170706  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:53.170727  167496 retry.go:31] will retry after 2.900802753s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:56.077479  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:56.077505  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:56.077510  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:56.077517  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:56.077523  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:56.077539  167496 retry.go:31] will retry after 2.931373296s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:57.546142  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:00.618191  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:59.014602  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:59.014631  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:59.014639  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:59.014650  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:59.014658  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:59.014679  167496 retry.go:31] will retry after 3.641834809s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:06.698118  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:02.662919  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:02.662957  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:02.662962  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:02.662978  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:02.662986  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:02.663000  167496 retry.go:31] will retry after 5.249216721s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:09.770058  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:07.918510  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:07.918540  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:07.918545  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:07.918551  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:07.918558  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:07.918575  167496 retry.go:31] will retry after 5.21551618s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:15.850131  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:13.139424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:13.139452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:13.139461  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:13.139470  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:13.139480  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:13.139499  167496 retry.go:31] will retry after 6.379920631s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:18.922143  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:19.525272  167496 system_pods.go:86] 5 kube-system pods found
	I1004 01:57:19.525311  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:19.525322  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Pending
	I1004 01:57:19.525329  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:19.525340  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:19.525350  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:19.525372  167496 retry.go:31] will retry after 7.200178423s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:25.002152  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:26.734572  167496 system_pods.go:86] 6 kube-system pods found
	I1004 01:57:26.734603  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:26.734610  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:26.734615  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:26.734619  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:26.734626  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:26.734640  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:26.734662  167496 retry.go:31] will retry after 10.892871067s: missing components: etcd, kube-apiserver
	I1004 01:57:28.078109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:34.158104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:37.634963  167496 system_pods.go:86] 8 kube-system pods found
	I1004 01:57:37.634993  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:37.634998  167496 system_pods.go:89] "etcd-old-k8s-version-107182" [18310540-21e4-4225-9ce0-e662fae16ca5] Running
	I1004 01:57:37.635003  167496 system_pods.go:89] "kube-apiserver-old-k8s-version-107182" [7418c38e-cae2-4d96-bb43-6827c37fc3dd] Running
	I1004 01:57:37.635008  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:37.635012  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:37.635015  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:37.635023  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:37.635028  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:37.635035  167496 system_pods.go:126] duration metric: took 53.064420406s to wait for k8s-apps to be running ...
	I1004 01:57:37.635042  167496 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:57:37.635088  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:57:37.654311  167496 system_svc.go:56] duration metric: took 19.259695ms WaitForService to wait for kubelet.
	I1004 01:57:37.654335  167496 kubeadm.go:581] duration metric: took 1m5.456439597s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:57:37.654358  167496 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:57:37.658645  167496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:57:37.658691  167496 node_conditions.go:123] node cpu capacity is 2
	I1004 01:57:37.658730  167496 node_conditions.go:105] duration metric: took 4.365872ms to run NodePressure ...
	I1004 01:57:37.658744  167496 start.go:228] waiting for startup goroutines ...
	I1004 01:57:37.658753  167496 start.go:233] waiting for cluster config update ...
	I1004 01:57:37.658763  167496 start.go:242] writing updated cluster config ...
	I1004 01:57:37.659093  167496 ssh_runner.go:195] Run: rm -f paused
	I1004 01:57:37.707603  167496 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1004 01:57:37.709678  167496 out.go:177] 
	W1004 01:57:37.711433  167496 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1004 01:57:37.713148  167496 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1004 01:57:37.714765  167496 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-107182" cluster and "default" namespace by default
	I1004 01:57:37.226085  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:43.306106  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:46.378086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:49.379613  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:57:49.379686  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:57:49.381326  169515 machine.go:91] provisioned docker machine in 4m37.42034364s
	I1004 01:57:49.381400  169515 fix.go:56] fixHost completed within 4m37.441947276s
	I1004 01:57:49.381413  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 4m37.441976851s
	W1004 01:57:49.381431  169515 start.go:688] error starting host: provision: host is not running
	W1004 01:57:49.381511  169515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1004 01:57:49.381527  169515 start.go:703] Will try again in 5 seconds ...
	I1004 01:57:54.381970  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:57:54.382105  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 82.376µs
	I1004 01:57:54.382139  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:57:54.382148  169515 fix.go:54] fixHost starting: 
	I1004 01:57:54.382415  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:57:54.382441  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:57:54.397922  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1004 01:57:54.398391  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:57:54.398857  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:57:54.398879  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:57:54.399227  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:57:54.399426  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:57:54.399606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:57:54.401353  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Stopped err=<nil>
	I1004 01:57:54.401379  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	W1004 01:57:54.401556  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:57:54.403451  169515 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-239802" ...
	I1004 01:57:54.404883  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Start
	I1004 01:57:54.405065  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring networks are active...
	I1004 01:57:54.405797  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network default is active
	I1004 01:57:54.406184  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network mk-default-k8s-diff-port-239802 is active
	I1004 01:57:54.406630  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Getting domain xml...
	I1004 01:57:54.407374  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Creating domain...
	I1004 01:57:55.768364  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting to get IP...
	I1004 01:57:55.769252  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769744  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769819  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.769720  170429 retry.go:31] will retry after 205.391459ms: waiting for machine to come up
	I1004 01:57:55.977260  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977696  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977721  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.977651  170429 retry.go:31] will retry after 308.679034ms: waiting for machine to come up
	I1004 01:57:56.288223  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288707  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288740  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.288656  170429 retry.go:31] will retry after 419.166959ms: waiting for machine to come up
	I1004 01:57:56.708911  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709549  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709581  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.709483  170429 retry.go:31] will retry after 402.015435ms: waiting for machine to come up
	I1004 01:57:57.113100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113682  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113735  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.113608  170429 retry.go:31] will retry after 555.795777ms: waiting for machine to come up
	I1004 01:57:57.671427  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672087  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672124  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.671985  170429 retry.go:31] will retry after 891.745334ms: waiting for machine to come up
	I1004 01:57:58.564986  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565501  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565533  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:58.565436  170429 retry.go:31] will retry after 897.272137ms: waiting for machine to come up
	I1004 01:57:59.465110  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465742  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465773  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:59.465695  170429 retry.go:31] will retry after 1.042370898s: waiting for machine to come up
	I1004 01:58:00.509812  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510320  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:00.510296  170429 retry.go:31] will retry after 1.512718285s: waiting for machine to come up
	I1004 01:58:02.024160  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024566  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:02.024502  170429 retry.go:31] will retry after 1.493800744s: waiting for machine to come up
	I1004 01:58:03.520361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520958  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520991  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:03.520911  170429 retry.go:31] will retry after 2.206730553s: waiting for machine to come up
	I1004 01:58:05.729534  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730016  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730050  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:05.729969  170429 retry.go:31] will retry after 3.088350315s: waiting for machine to come up
	I1004 01:58:08.820266  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820743  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820774  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:08.820689  170429 retry.go:31] will retry after 2.773482095s: waiting for machine to come up
	I1004 01:58:11.595977  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596515  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596540  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:11.596475  170429 retry.go:31] will retry after 3.486376696s: waiting for machine to come up
	I1004 01:58:15.084904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.085418  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Found IP for machine: 192.168.61.105
	I1004 01:58:15.085447  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserving static IP address...
	I1004 01:58:15.085460  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has current primary IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.086007  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.086039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserved static IP address: 192.168.61.105
	I1004 01:58:15.086059  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | skip adding static IP to network mk-default-k8s-diff-port-239802 - found existing host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"}
	I1004 01:58:15.086080  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Getting to WaitForSSH function...
	I1004 01:58:15.086098  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for SSH to be available...
	I1004 01:58:15.088134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088506  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.088538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088726  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH client type: external
	I1004 01:58:15.088751  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa (-rw-------)
	I1004 01:58:15.088802  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:58:15.088817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | About to run SSH command:
	I1004 01:58:15.088829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | exit 0
	I1004 01:58:15.226051  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | SSH cmd err, output: <nil>: 
	I1004 01:58:15.226408  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetConfigRaw
	I1004 01:58:15.227055  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.229669  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.230108  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230390  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:58:15.230651  169515 machine.go:88] provisioning docker machine ...
	I1004 01:58:15.230676  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:15.230912  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231113  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:58:15.231134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231297  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.233606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.233990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.234026  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.234134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.234317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234484  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234663  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.234867  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.235199  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.235213  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:58:15.374541  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-239802
	
	I1004 01:58:15.374573  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.377761  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378278  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.378321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378494  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.378705  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378967  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.379135  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.379569  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.379594  169515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-239802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-239802/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-239802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:58:15.520076  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:58:15.520107  169515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:58:15.520129  169515 buildroot.go:174] setting up certificates
	I1004 01:58:15.520141  169515 provision.go:83] configureAuth start
	I1004 01:58:15.520155  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.520502  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.523317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.523814  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.523854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.524058  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.526453  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526752  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.526794  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526920  169515 provision.go:138] copyHostCerts
	I1004 01:58:15.526985  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:58:15.527069  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:58:15.527197  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:58:15.527323  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:58:15.527337  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:58:15.527373  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:58:15.527450  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:58:15.527460  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:58:15.527490  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:58:15.527550  169515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-239802 san=[192.168.61.105 192.168.61.105 localhost 127.0.0.1 minikube default-k8s-diff-port-239802]
	I1004 01:58:15.632152  169515 provision.go:172] copyRemoteCerts
	I1004 01:58:15.632211  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:58:15.632236  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.635344  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.635733  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635886  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.636100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.636262  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.636411  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:15.731442  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1004 01:58:15.755690  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:58:15.781135  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:58:15.805779  169515 provision.go:86] duration metric: configureAuth took 285.621049ms
	I1004 01:58:15.805813  169515 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:58:15.806097  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:58:15.806193  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.809186  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.809648  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.810105  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810354  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810577  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.810822  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.811265  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.811283  169515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:58:16.145471  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:58:16.145515  169515 machine.go:91] provisioned docker machine in 914.847777ms
	I1004 01:58:16.145528  169515 start.go:300] post-start starting for "default-k8s-diff-port-239802" (driver="kvm2")
	I1004 01:58:16.145541  169515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:58:16.145564  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.145936  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:58:16.145970  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.148759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149272  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.149306  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149563  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.149803  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.150023  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.150185  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.245579  169515 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:58:16.250364  169515 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:58:16.250394  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:58:16.250472  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:58:16.250566  169515 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:58:16.250821  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:58:16.260991  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:16.283999  169515 start.go:303] post-start completed in 138.45373ms
	I1004 01:58:16.284022  169515 fix.go:56] fixHost completed within 21.901874601s
	I1004 01:58:16.284043  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.286817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287150  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.287174  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287383  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.287598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287848  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.288010  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:16.288381  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:16.288414  169515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:58:16.418775  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696384696.400645117
	
	I1004 01:58:16.418799  169515 fix.go:206] guest clock: 1696384696.400645117
	I1004 01:58:16.418806  169515 fix.go:219] Guest: 2023-10-04 01:58:16.400645117 +0000 UTC Remote: 2023-10-04 01:58:16.284026062 +0000 UTC m=+304.486597710 (delta=116.619055ms)
	I1004 01:58:16.418832  169515 fix.go:190] guest clock delta is within tolerance: 116.619055ms
	I1004 01:58:16.418837  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 22.036713239s
	I1004 01:58:16.418861  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.419152  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:16.421829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422225  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.422265  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.422990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423191  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423288  169515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:58:16.423361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.423400  169515 ssh_runner.go:195] Run: cat /version.json
	I1004 01:58:16.423430  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.426244  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426412  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426666  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426835  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.426903  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426928  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.427049  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427079  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.427257  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427305  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427389  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.427491  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427616  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.541652  169515 ssh_runner.go:195] Run: systemctl --version
	I1004 01:58:16.548207  169515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:58:16.689236  169515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:58:16.695609  169515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:58:16.695700  169515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:58:16.711541  169515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:58:16.711569  169515 start.go:469] detecting cgroup driver to use...
	I1004 01:58:16.711648  169515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:58:16.727693  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:58:16.741081  169515 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:58:16.741145  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:58:16.754740  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:58:16.768697  169515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:58:16.892808  169515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:58:17.012129  169515 docker.go:213] disabling docker service ...
	I1004 01:58:17.012203  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:58:17.027872  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:58:17.039804  169515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:58:17.138577  169515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:58:17.242819  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:58:17.255768  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:58:17.273761  169515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:58:17.273824  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.284028  169515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:58:17.284103  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.294763  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.304668  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.314305  169515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:58:17.324280  169515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:58:17.333123  169515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:58:17.333181  169515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:58:17.346921  169515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:58:17.357411  169515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:58:17.466076  169515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:58:17.665370  169515 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:58:17.665446  169515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:58:17.671020  169515 start.go:537] Will wait 60s for crictl version
	I1004 01:58:17.671103  169515 ssh_runner.go:195] Run: which crictl
	I1004 01:58:17.675046  169515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:58:17.711171  169515 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:58:17.711255  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.764684  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.818887  169515 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:58:17.820580  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:17.823598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824003  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:17.824039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824180  169515 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 01:58:17.828529  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:17.842201  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:58:17.842277  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:17.889167  169515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:58:17.889260  169515 ssh_runner.go:195] Run: which lz4
	I1004 01:58:17.893479  169515 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 01:58:17.898162  169515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:58:17.898208  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:58:19.729377  169515 crio.go:444] Took 1.835934 seconds to copy over tarball
	I1004 01:58:19.729456  169515 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:58:22.593494  169515 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.864005818s)
	I1004 01:58:22.593526  169515 crio.go:451] Took 2.864115 seconds to extract the tarball
	I1004 01:58:22.593541  169515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:58:22.637806  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:22.688382  169515 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:58:22.688411  169515 cache_images.go:84] Images are preloaded, skipping loading
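The preload flow above decides whether to copy the tarball by checking "sudo crictl images --output json" for a reference image (registry.k8s.io/kube-apiserver:v1.28.2): if it is missing, the preload tarball is copied and extracted, otherwise loading is skipped. A rough sketch of that check, assuming the usual crictl JSON shape with an images array carrying repoTags (field names assumed, not taken from minikube's code):

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages mirrors only the part of `crictl images --output json`
// used here (an images array whose entries carry repoTags).
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted tag appears in the crictl JSON output.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Sample output standing in for `sudo crictl images --output json`.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.2")
	if err != nil {
		panic(err)
	}
	if ok {
		fmt.Println("reference image present: images are preloaded")
	} else {
		fmt.Println("reference image missing: copy and extract the preload tarball")
	}
}
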
	I1004 01:58:22.688492  169515 ssh_runner.go:195] Run: crio config
	I1004 01:58:22.763035  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:22.763056  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:22.763523  169515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:58:22.763558  169515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-239802 NodeName:default-k8s-diff-port-239802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:58:22.763710  169515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-239802"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:58:22.763781  169515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-239802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1004 01:58:22.763836  169515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:58:22.772839  169515 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:58:22.772912  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:58:22.781165  169515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1004 01:58:22.799884  169515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:58:22.817806  169515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1004 01:58:22.836379  169515 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1004 01:58:22.840577  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:22.854009  169515 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802 for IP: 192.168.61.105
	I1004 01:58:22.854051  169515 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:58:22.854225  169515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:58:22.854280  169515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:58:22.854390  169515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/client.key
	I1004 01:58:22.854470  169515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key.c44c9625
	I1004 01:58:22.854525  169515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key
	I1004 01:58:22.854676  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:58:22.854716  169515 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:58:22.854731  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:58:22.854795  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:58:22.854841  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:58:22.854874  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:58:22.854936  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:22.855704  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:58:22.883055  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:58:22.909260  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:58:22.936140  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:58:22.963068  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:58:22.990358  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:58:23.019293  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:58:23.046021  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:58:23.072727  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:58:23.099530  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:58:23.125965  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:58:23.152909  169515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:58:23.171043  169515 ssh_runner.go:195] Run: openssl version
	I1004 01:58:23.177062  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:58:23.187693  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192607  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192695  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.198687  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:58:23.208870  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:58:23.220345  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225134  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225205  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.230830  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:58:23.241519  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:58:23.251661  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256671  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256740  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.263041  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:58:23.272914  169515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:58:23.277650  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:58:23.283889  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:58:23.289960  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:58:23.295853  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:58:23.302386  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:58:23.308626  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
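The openssl invocations above are 24-hour expiry checks: "openssl x509 -noout -in <cert> -checkend 86400" exits non-zero if the certificate expires within 86400 seconds, in which case the cert would be regenerated. An equivalent check can be sketched with Go's crypto/x509 (cert paths are reused from the log; the helper name is made up for the sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the Go equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Paths as seen in the log above; adjust for a local experiment.
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		if soon {
			fmt.Println(p, "expires within 24h, would regenerate")
		} else {
			fmt.Println(p, "still valid for at least 24h")
		}
	}
}
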
	I1004 01:58:23.315173  169515 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:58:23.315270  169515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:58:23.315329  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:23.360078  169515 cri.go:89] found id: ""
	I1004 01:58:23.360160  169515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:58:23.370577  169515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:58:23.370607  169515 kubeadm.go:636] restartCluster start
	I1004 01:58:23.370670  169515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:58:23.380554  169515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.382064  169515 kubeconfig.go:92] found "default-k8s-diff-port-239802" server: "https://192.168.61.105:8444"
	I1004 01:58:23.384489  169515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:58:23.394552  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.394621  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.406027  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.406050  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.406088  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.416731  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.917459  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.917567  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.929055  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.417118  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.417196  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.429944  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.917530  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.917640  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.928908  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.417526  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.417598  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.429815  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.917482  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.917579  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.928966  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.417583  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.417703  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.429371  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.917165  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.917259  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.929210  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.417701  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.417803  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.429305  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.917024  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.928702  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.417024  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.417142  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.428772  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.917340  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.917439  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.929099  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.417234  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.417333  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.429431  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.916874  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.916967  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.928613  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.417157  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.417247  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.429364  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.917013  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.928682  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.417225  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.417328  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.429087  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.917131  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.917218  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.929475  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.416979  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.417061  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.431474  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.917018  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.917123  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.929083  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:33.394900  169515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1004 01:58:33.394937  169515 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:58:33.394955  169515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:58:33.395025  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:33.439584  169515 cri.go:89] found id: ""
	I1004 01:58:33.439676  169515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:58:33.455188  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:58:33.464838  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:58:33.464909  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473594  169515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473622  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:33.606598  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.496399  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.698397  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.778632  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.858383  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:58:34.858475  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:34.871386  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.384197  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.884575  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.383599  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.883552  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.384513  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.409737  169515 api_server.go:72] duration metric: took 2.551352833s to wait for apiserver process to appear ...
	I1004 01:58:37.409768  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:58:37.409791  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410400  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.410464  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410871  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.911616  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.733688  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.733788  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.733802  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.789718  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.789758  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.911398  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.919484  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:41.919510  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.411543  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.417441  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.417474  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.910983  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.918972  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.918999  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:43.411752  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:43.418030  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 01:58:43.429647  169515 api_server.go:141] control plane version: v1.28.2
	I1004 01:58:43.429678  169515 api_server.go:131] duration metric: took 6.019900977s to wait for apiserver health ...
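The healthz wait above polls https://192.168.61.105:8444/healthz and treats connection refused, 403 from the anonymous user, and 500 while post-start hooks finish as "not ready yet", succeeding only on a 200 "ok". A minimal polling sketch under those assumptions (it skips TLS verification purely for illustration; the real check would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes. Connection errors, 403 and 500 are
// treated as "not ready yet", matching the retries visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// For the sketch only; a real client should verify the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.105:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
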
	I1004 01:58:43.429690  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:43.429697  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:43.431972  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:58:43.433484  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:58:43.447694  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:58:43.471374  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:58:43.481660  169515 system_pods.go:59] 8 kube-system pods found
	I1004 01:58:43.481703  169515 system_pods.go:61] "coredns-5dd5756b68-ntmdn" [93a30dd9-0d38-4648-9291-703928437ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:58:43.481716  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [387a9b5c-12b7-4be8-ab2a-a05f15640f17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 01:58:43.481725  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [a9900212-1372-410f-b6d9-105f78dfde92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 01:58:43.481735  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [d9684911-65f2-4b81-800a-9d99b277b7e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:58:43.481747  169515 system_pods.go:61] "kube-proxy-v9qw4" [6db82ea2-130c-4f40-ae3e-2abe4fdb2860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 01:58:43.481757  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [98b82b29-64c3-4042-bf6b-040b05992648] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:58:43.481770  169515 system_pods.go:61] "metrics-server-57f55c9bc5-hxrqk" [94e85ebf-dba5-4975-8167-bc23dc74b5f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:58:43.481789  169515 system_pods.go:61] "storage-provisioner" [11d1866b-ef0b-4b12-a2d3-a38fe68f5184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 01:58:43.481801  169515 system_pods.go:74] duration metric: took 10.402243ms to wait for pod list to return data ...
	I1004 01:58:43.481815  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:58:43.485997  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:58:43.486041  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 01:58:43.486056  169515 node_conditions.go:105] duration metric: took 4.234155ms to run NodePressure ...
	I1004 01:58:43.486078  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:43.740784  169515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749933  169515 kubeadm.go:787] kubelet initialised
	I1004 01:58:43.749956  169515 kubeadm.go:788] duration metric: took 9.146841ms waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749964  169515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:58:43.762449  169515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:45.795545  169515 pod_ready.go:102] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:47.294570  169515 pod_ready.go:92] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:47.294593  169515 pod_ready.go:81] duration metric: took 3.532106169s waiting for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:47.294629  169515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:49.318426  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.320090  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.819783  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.819808  169515 pod_ready.go:81] duration metric: took 4.525169791s waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.819820  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825714  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.825738  169515 pod_ready.go:81] duration metric: took 5.910346ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825750  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345345  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.345375  169515 pod_ready.go:81] duration metric: took 519.614193ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345388  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351098  169515 pod_ready.go:92] pod "kube-proxy-v9qw4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.351115  169515 pod_ready.go:81] duration metric: took 5.721421ms waiting for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351123  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675957  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.675986  169515 pod_ready.go:81] duration metric: took 324.855954ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675999  169515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:54.985434  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:56.986014  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:59.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:01.984178  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:03.986718  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:06.486121  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:08.986286  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:10.988493  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:13.487313  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:15.986463  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:17.987092  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:20.484986  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:22.985012  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:25.486297  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:27.988254  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:30.486124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:32.486163  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:34.986124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:36.986217  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:39.485494  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:41.485638  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:43.987966  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:46.484556  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:48.984057  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:50.984900  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:53.483808  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:55.484765  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:57.485763  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:59.985726  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:02.484831  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:04.985989  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:07.485664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:09.485893  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:11.985932  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:13.986799  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:16.488334  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:18.985949  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:21.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:23.986108  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:26.486381  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:28.984912  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:31.484885  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:33.485511  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:35.485786  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:37.985061  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:40.486400  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:42.985255  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:45.485905  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:47.985646  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:49.988812  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:52.485077  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:54.485567  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:56.486128  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:58.486811  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:00.985292  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:02.985432  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:04.990218  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:07.485695  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:09.485758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:11.985237  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:13.988632  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:16.486921  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:18.986300  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:21.486008  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:23.990988  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:26.486730  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:28.984846  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:30.985403  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:32.985500  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:34.989615  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:37.485216  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:39.985745  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:42.485969  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:44.984000  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:46.984954  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:49.485168  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:51.986705  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:53.987005  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:56.484664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:58.485697  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:00.486876  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:02.986832  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:05.485817  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:07.486977  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:09.984945  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:11.985637  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:13.985859  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:16.484825  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:18.485020  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:20.485388  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:22.486622  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:24.985561  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:27.484794  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:29.986684  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:32.494495  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:34.984951  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:36.985082  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:38.987881  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:41.485453  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:43.486758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:45.983941  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:47.984452  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:50.486243  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:52.676831  169515 pod_ready.go:81] duration metric: took 4m0.000812817s waiting for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	E1004 02:02:52.676871  169515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 02:02:52.676911  169515 pod_ready.go:38] duration metric: took 4m8.926937921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:02:52.676950  169515 kubeadm.go:640] restartCluster took 4m29.306332407s
	W1004 02:02:52.677028  169515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
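The four-minute stretch above is minikube polling each system-critical pod's Ready condition every few seconds; "metrics-server-57f55c9bc5-hxrqk" never reports Ready, so restartCluster gives up and falls back to a full kubeadm reset below. As a rough illustration of what that check amounts to, here is a standalone client-go sketch (this is not minikube's pod_ready.go; the kubeconfig path and pod name are taken from this log, while the polling interval and deadline are illustrative):

    // Standalone sketch of a "wait for pod Ready" loop similar to the one logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-hxrqk", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }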
	I1004 02:02:52.677066  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 02:03:06.687598  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.010492171s)
	I1004 02:03:06.687683  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:06.702277  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:03:06.711887  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:03:06.721545  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:03:06.721606  169515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:03:06.964165  169515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:03:17.591049  169515 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:03:17.591142  169515 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:03:17.591233  169515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:03:17.591398  169515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:03:17.591561  169515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:03:17.591679  169515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:03:17.593418  169515 out.go:204]   - Generating certificates and keys ...
	I1004 02:03:17.593514  169515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:03:17.593593  169515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:03:17.593716  169515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 02:03:17.593817  169515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 02:03:17.593913  169515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 02:03:17.593964  169515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 02:03:17.594015  169515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 02:03:17.594064  169515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 02:03:17.594137  169515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 02:03:17.594216  169515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 02:03:17.594254  169515 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 02:03:17.594318  169515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:03:17.594374  169515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:03:17.594446  169515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:03:17.594525  169515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:03:17.594596  169515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:03:17.594701  169515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:03:17.594785  169515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:03:17.596492  169515 out.go:204]   - Booting up control plane ...
	I1004 02:03:17.596593  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:03:17.596678  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:03:17.596767  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:03:17.596903  169515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:03:17.597026  169515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:03:17.597087  169515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:03:17.597271  169515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:03:17.597365  169515 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004292 seconds
	I1004 02:03:17.597507  169515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:03:17.597663  169515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:03:17.597752  169515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:03:17.598019  169515 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-239802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:03:17.598091  169515 kubeadm.go:322] [bootstrap-token] Using token: 23w16s.bx0je8b3n2xujqpx
	I1004 02:03:17.599777  169515 out.go:204]   - Configuring RBAC rules ...
	I1004 02:03:17.599892  169515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:03:17.600022  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:03:17.600211  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:03:17.600376  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:03:17.600517  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:03:17.600640  169515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:03:17.600774  169515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:03:17.600836  169515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:03:17.600895  169515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:03:17.600908  169515 kubeadm.go:322] 
	I1004 02:03:17.600957  169515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:03:17.600963  169515 kubeadm.go:322] 
	I1004 02:03:17.601026  169515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:03:17.601032  169515 kubeadm.go:322] 
	I1004 02:03:17.601053  169515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:03:17.601102  169515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:03:17.601157  169515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:03:17.601164  169515 kubeadm.go:322] 
	I1004 02:03:17.601213  169515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:03:17.601226  169515 kubeadm.go:322] 
	I1004 02:03:17.601282  169515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:03:17.601289  169515 kubeadm.go:322] 
	I1004 02:03:17.601369  169515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:03:17.601470  169515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:03:17.601584  169515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:03:17.601594  169515 kubeadm.go:322] 
	I1004 02:03:17.601698  169515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:03:17.601780  169515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:03:17.601791  169515 kubeadm.go:322] 
	I1004 02:03:17.601919  169515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602052  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:03:17.602084  169515 kubeadm.go:322] 	--control-plane 
	I1004 02:03:17.602094  169515 kubeadm.go:322] 
	I1004 02:03:17.602212  169515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:03:17.602221  169515 kubeadm.go:322] 
	I1004 02:03:17.602358  169515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602512  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:03:17.602532  169515 cni.go:84] Creating CNI manager for ""
	I1004 02:03:17.602543  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:03:17.605029  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:03:17.606395  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:03:17.633626  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 02:03:17.708983  169515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:03:17.709074  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.709079  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=default-k8s-diff-port-239802 minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.817989  169515 ops.go:34] apiserver oom_adj: -16
	I1004 02:03:18.073171  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.187308  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.820889  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.320388  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.820323  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.320333  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.821163  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.320330  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.821019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.321019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.821177  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.321168  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.820299  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.320582  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.820863  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.320469  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.820489  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.321120  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.820999  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.321119  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.820996  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.320295  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.821014  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.320832  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.820960  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.321064  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.472351  169515 kubeadm.go:1081] duration metric: took 12.76333985s to wait for elevateKubeSystemPrivileges.
	I1004 02:03:30.472398  169515 kubeadm.go:406] StartCluster complete in 5m7.157236676s
	I1004 02:03:30.472421  169515 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.472516  169515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:03:30.474474  169515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.474744  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:03:30.474777  169515 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:03:30.474868  169515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474889  169515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474894  169515 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474903  169515 addons.go:240] addon storage-provisioner should already be in state true
	I1004 02:03:30.474906  169515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474929  169515 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474938  169515 addons.go:240] addon metrics-server should already be in state true
	I1004 02:03:30.474973  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474985  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474911  169515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-239802"
	I1004 02:03:30.474998  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475437  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475468  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475439  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475657  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.493623  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I1004 02:03:30.493662  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I1004 02:03:30.493781  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1004 02:03:30.494163  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494166  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494444  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494788  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494790  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494812  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.494815  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495193  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.495213  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.495555  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495810  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.495842  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.496520  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.496559  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.499305  169515 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.499322  169515 addons.go:240] addon default-storageclass should already be in state true
	I1004 02:03:30.499345  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.499914  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.499942  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.514137  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I1004 02:03:30.514752  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.515464  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.515494  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.515576  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1004 02:03:30.515848  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.515990  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.516030  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.516461  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.516481  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.516840  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.517034  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.518156  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.518191  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I1004 02:03:30.521584  169515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 02:03:30.518793  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.518847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.522961  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:03:30.522981  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:03:30.523002  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.524589  169515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:03:30.523376  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.524627  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.525081  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.525873  169515 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.525888  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:03:30.525904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.526430  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.526461  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.526677  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.530913  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.531170  169515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-239802" context rescaled to 1 replicas
	I1004 02:03:30.531206  169515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:03:30.532986  169515 out.go:177] * Verifying Kubernetes components...
	I1004 02:03:30.531340  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.531757  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.533318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.533937  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.535094  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:30.535197  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.535227  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.535231  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535394  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535440  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.535914  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.535943  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.536116  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.549570  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I1004 02:03:30.550039  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.550714  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.550744  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.551157  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.551347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.553113  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.553403  169515 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.553418  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:03:30.553433  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.555904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556293  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.556318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.556748  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.556908  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.557059  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.745640  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.772975  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:03:30.772997  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 02:03:30.828675  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.862436  169515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.862505  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:03:30.867582  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:03:30.867606  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:03:30.869762  169515 node_ready.go:49] node "default-k8s-diff-port-239802" has status "Ready":"True"
	I1004 02:03:30.869782  169515 node_ready.go:38] duration metric: took 7.313127ms waiting for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.869791  169515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:30.878259  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:30.953707  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:30.953739  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:03:31.080848  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:31.923980  169515 pod_ready.go:97] error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924020  169515 pod_ready.go:81] duration metric: took 1.045735768s waiting for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	E1004 02:03:31.924034  169515 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924041  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.089720  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.344027143s)
	I1004 02:03:33.089798  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.227266643s)
	I1004 02:03:33.089820  169515 start.go:923] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
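For reference, the long sed pipeline completed just above edits the coredns ConfigMap before re-applying it with kubectl replace. Reconstructed from the sed expression itself (not read back from the cluster), it inserts a log directive before the existing errors line and the following hosts block before the forward . /etc/resolv.conf line, so that host.minikube.internal resolves to the host address 192.168.61.1 noted in the log line above:

    hosts {
       192.168.61.1 host.minikube.internal
       fallthrough
    }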
	I1004 02:03:33.089826  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089749  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261039922s)
	I1004 02:03:33.089847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.089856  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089872  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090197  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090217  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090228  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090226  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090240  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090292  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090310  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090322  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090333  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090332  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090486  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090501  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090993  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.091015  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.120294  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.120321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.120639  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.120660  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379169  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.298272317s)
	I1004 02:03:33.379231  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379247  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379568  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379585  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379595  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379608  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379884  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.379928  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379952  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379965  169515 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-239802"
	I1004 02:03:33.382638  169515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 02:03:33.384185  169515 addons.go:502] enable addons completed in 2.909411548s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 02:03:33.970600  169515 pod_ready.go:92] pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.970634  169515 pod_ready.go:81] duration metric: took 2.046583312s waiting for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.970649  169515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976833  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.976858  169515 pod_ready.go:81] duration metric: took 6.200437ms waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976870  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.983984  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.984006  169515 pod_ready.go:81] duration metric: took 7.126822ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.984016  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269435  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.269462  169515 pod_ready.go:81] duration metric: took 285.437635ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269476  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667111  169515 pod_ready.go:92] pod "kube-proxy-b5ltp" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.667138  169515 pod_ready.go:81] duration metric: took 397.655055ms waiting for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667147  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068656  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:35.068692  169515 pod_ready.go:81] duration metric: took 401.53728ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068706  169515 pod_ready.go:38] duration metric: took 4.198904278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
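	The waits above poll each system-critical pod for its Ready condition using the labels listed in the summary line. Roughly the same check can be reproduced from the host with kubectl wait; a sketch, assuming the kubectl context that this profile creates:

	kubectl --context default-k8s-diff-port-239802 -n kube-system wait \
	  --for=condition=ready pod -l k8s-app=kube-dns --timeout=360s
	kubectl --context default-k8s-diff-port-239802 -n kube-system wait \
	  --for=condition=ready pod -l component=kube-apiserver --timeout=360s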
	I1004 02:03:35.068731  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:03:35.068800  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:03:35.085104  169515 api_server.go:72] duration metric: took 4.553859804s to wait for apiserver process to appear ...
	I1004 02:03:35.085129  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:03:35.085148  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 02:03:35.093144  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 02:03:35.094563  169515 api_server.go:141] control plane version: v1.28.2
	I1004 02:03:35.094583  169515 api_server.go:131] duration metric: took 9.447369ms to wait for apiserver health ...
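	The healthz probe above talks to the apiserver directly on the non-default port 8444. Once the kubeconfig is written, the same endpoints can be queried through kubectl; a sketch, assuming the context name matches the profile:

	kubectl --context default-k8s-diff-port-239802 get --raw /healthz
	# expected output: ok
	kubectl --context default-k8s-diff-port-239802 get --raw '/readyz?verbose'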
	I1004 02:03:35.094591  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:03:35.271828  169515 system_pods.go:59] 8 kube-system pods found
	I1004 02:03:35.271855  169515 system_pods.go:61] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.271862  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.271867  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.271871  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.271875  169515 system_pods.go:61] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.271879  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.271886  169515 system_pods.go:61] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.271891  169515 system_pods.go:61] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.271899  169515 system_pods.go:74] duration metric: took 177.302484ms to wait for pod list to return data ...
	I1004 02:03:35.271906  169515 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:03:35.466915  169515 default_sa.go:45] found service account: "default"
	I1004 02:03:35.466956  169515 default_sa.go:55] duration metric: took 195.042376ms for default service account to be created ...
	I1004 02:03:35.466968  169515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:03:35.669331  169515 system_pods.go:86] 8 kube-system pods found
	I1004 02:03:35.669358  169515 system_pods.go:89] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.669363  169515 system_pods.go:89] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.669368  169515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.669372  169515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.669376  169515 system_pods.go:89] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.669380  169515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.669386  169515 system_pods.go:89] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.669391  169515 system_pods.go:89] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.669397  169515 system_pods.go:126] duration metric: took 202.42259ms to wait for k8s-apps to be running ...
	I1004 02:03:35.669404  169515 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:03:35.669446  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:35.685440  169515 system_svc.go:56] duration metric: took 16.022733ms WaitForService to wait for kubelet.
	I1004 02:03:35.685475  169515 kubeadm.go:581] duration metric: took 5.154237901s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 02:03:35.685502  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:03:35.867523  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 02:03:35.867616  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 02:03:35.867645  169515 node_conditions.go:105] duration metric: took 182.13715ms to run NodePressure ...
	I1004 02:03:35.867672  169515 start.go:228] waiting for startup goroutines ...
	I1004 02:03:35.867711  169515 start.go:233] waiting for cluster config update ...
	I1004 02:03:35.867729  169515 start.go:242] writing updated cluster config ...
	I1004 02:03:35.868000  169515 ssh_runner.go:195] Run: rm -f paused
	I1004 02:03:35.921562  169515 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 02:03:35.924514  169515 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-239802" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:01 UTC, ends at Wed 2023-10-04 02:04:31 UTC. --
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.640867166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385071640851168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e2a0b8d2-fb56-4564-8c35-d255e58ac5da name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.641685026Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0b9dd49e-40ed-4ae2-8f3e-3e9116f55006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.641734656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0b9dd49e-40ed-4ae2-8f3e-3e9116f55006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.641882214Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0b9dd49e-40ed-4ae2-8f3e-3e9116f55006 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.681976477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=111bf136-b3f4-46b3-b2d4-f1367d1270f1 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.682064361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=111bf136-b3f4-46b3-b2d4-f1367d1270f1 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.683570602Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2ecac3b8-5f0d-4fc9-a8d9-1a21cbef55f3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.683955863Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385071683941160,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2ecac3b8-5f0d-4fc9-a8d9-1a21cbef55f3 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.684471091Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=32692adf-0d8e-4933-be24-8ca29b3ca1c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.684545608Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=32692adf-0d8e-4933-be24-8ca29b3ca1c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.684723628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=32692adf-0d8e-4933-be24-8ca29b3ca1c0 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.727470063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=76b6afca-8dac-43d0-a999-c4f295edb0b3 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.727561829Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=76b6afca-8dac-43d0-a999-c4f295edb0b3 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.729763471Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e547bb9e-6596-43ca-94e2-00f6bbef050d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.730349089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385071730334173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=e547bb9e-6596-43ca-94e2-00f6bbef050d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.731972313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0fe7345e-feaf-45ab-98d0-d4dd824d81c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.732046043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0fe7345e-feaf-45ab-98d0-d4dd824d81c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.732267248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0fe7345e-feaf-45ab-98d0-d4dd824d81c6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.772564334Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bc7a1160-0fc9-47f6-a6c3-2f0c2b80e4ea name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.772634868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bc7a1160-0fc9-47f6-a6c3-2f0c2b80e4ea name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.774581742Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=34e04124-0e5f-4186-88c1-e3114a41d5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.775040118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385071775025991,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=34e04124-0e5f-4186-88c1-e3114a41d5f9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.776856548Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e2627b44-4215-4a53-a0fc-611be67d86c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.776901644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e2627b44-4215-4a53-a0fc-611be67d86c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:31 embed-certs-509298 crio[728]: time="2023-10-04 02:04:31.777223412Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e2627b44-4215-4a53-a0fc-611be67d86c9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b97474e8630e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5e11df276a01b       storage-provisioner
	f3316d73aebf8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   12445b59fdb15       coredns-5dd5756b68-79qrq
	a46b80885b26c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   c0526e00426af       kube-proxy-f99th
	7f21b00e9dc48       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   3c6fed7f87557       etcd-embed-certs-509298
	0af148e957984       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   007f4f9fa55d5       kube-scheduler-embed-certs-509298
	a32990e0a3fdd       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   1588b854bcd2d       kube-apiserver-embed-certs-509298
	f6d6bd9377fe5       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   2bb24e65b5083       kube-controller-manager-embed-certs-509298
	
	* 
	* ==> coredns [f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-509298
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-509298
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=embed-certs-509298
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:55:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-509298
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:04:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:00:39 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:00:39 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:00:39 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:00:39 +0000   Wed, 04 Oct 2023 01:55:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.170
	  Hostname:    embed-certs-509298
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ca3d4c150cb4b8d88c9054d5234c3d2
	  System UUID:                1ca3d4c1-50cb-4b8d-88c9-054d5234c3d2
	  Boot ID:                    63533b45-ed5a-431a-bd38-01bf2e9c1790
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-79qrq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-509298                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-509298             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-509298    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-f99th                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-509298             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-27696               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node embed-certs-509298 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node embed-certs-509298 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node embed-certs-509298 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node embed-certs-509298 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m20s  kubelet          Node embed-certs-509298 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-509298 event: Registered Node embed-certs-509298 in Controller
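	The node description above is a point-in-time snapshot from the embed-certs-509298 apiserver. To re-query just the Ready condition and the allocatable resources, a sketch, assuming the profile's kubectl context:

	kubectl --context embed-certs-509298 describe node embed-certs-509298
	kubectl --context embed-certs-509298 get node embed-certs-509298 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}{.status.allocatable.cpu}{"\n"}{.status.allocatable.memory}{"\n"}'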
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.495858] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct 4 01:50] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146804] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.537969] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.791306] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.136700] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.176354] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.117843] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.237591] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +17.532316] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +22.511843] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 4 01:55] systemd-fstab-generator[3455]: Ignoring "noauto" for root device
	[ +10.304311] systemd-fstab-generator[3788]: Ignoring "noauto" for root device
	[ +14.271481] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f] <==
	* {"level":"info","ts":"2023-10-04T01:55:06.168998Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.170:2380"}
	{"level":"info","ts":"2023-10-04T01:55:06.171984Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:55:06.172324Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ffefcf45db661597","initial-advertise-peer-urls":["https://192.168.50.170:2380"],"listen-peer-urls":["https://192.168.50.170:2380"],"advertise-client-urls":["https://192.168.50.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:55:06.792973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-04T01:55:06.793095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-04T01:55:06.793236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 received MsgPreVoteResp from ffefcf45db661597 at term 1"}
	{"level":"info","ts":"2023-10-04T01:55:06.793283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 became candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:55:06.79339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 received MsgVoteResp from ffefcf45db661597 at term 2"}
	{"level":"info","ts":"2023-10-04T01:55:06.793422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ffefcf45db661597 became leader at term 2"}
	{"level":"info","ts":"2023-10-04T01:55:06.793448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ffefcf45db661597 elected leader ffefcf45db661597 at term 2"}
	{"level":"info","ts":"2023-10-04T01:55:06.794991Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ffefcf45db661597","local-member-attributes":"{Name:embed-certs-509298 ClientURLs:[https://192.168.50.170:2379]}","request-path":"/0/members/ffefcf45db661597/attributes","cluster-id":"6d889d17c3567f80","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:55:06.795088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:55:06.795656Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:55:06.796822Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:55:06.796867Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T01:55:06.796897Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6d889d17c3567f80","local-member-id":"ffefcf45db661597","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:55:06.79697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:55:06.797021Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T01:55:06.79698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:55:06.797062Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:55:06.798342Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.170:2379"}
	{"level":"info","ts":"2023-10-04T01:58:24.099496Z","caller":"traceutil/trace.go:171","msg":"trace[677724988] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"207.843989ms","start":"2023-10-04T01:58:23.89161Z","end":"2023-10-04T01:58:24.099454Z","steps":["trace[677724988] 'process raft request'  (duration: 207.640893ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:24.503559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.020387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T01:58:24.50379Z","caller":"traceutil/trace.go:171","msg":"trace[422866158] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"384.347952ms","start":"2023-10-04T01:58:24.119415Z","end":"2023-10-04T01:58:24.503763Z","steps":["trace[422866158] 'range keys from in-memory index tree'  (duration: 383.941467ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:24.503944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:58:24.119395Z","time spent":"384.456238ms","remote":"127.0.0.1:46034","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	
	* 
	* ==> kernel <==
	*  02:04:32 up 14 min,  0 users,  load average: 0.34, 0.40, 0.31
	Linux embed-certs-509298 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f] <==
	* W1004 02:00:09.416664       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:00:09.416770       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:00:09.416798       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:00:09.416927       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:00:09.417047       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:00:09.418245       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:01:08.396327       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:01:09.417852       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:01:09.418083       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:01:09.418209       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:01:09.419008       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:01:09.419246       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:01:09.419290       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:02:08.395724       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 02:03:08.395314       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:03:09.419306       1 handler_proxy.go:93] no RequestInfo found in the context
	W1004 02:03:09.419445       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:03:09.419560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:03:09.419604       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1004 02:03:09.419562       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:03:09.421570       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:04:08.395841       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab] <==
	* I1004 01:58:55.312708       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 01:59:24.725835       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 01:59:25.324392       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 01:59:54.733989       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 01:59:55.336878       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:00:24.740865       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:00:25.349837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:00:54.746767       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:00:55.366488       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:01:20.431561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="516.574µs"
	E1004 02:01:24.755647       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:01:25.376397       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:01:32.421427       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="194.765µs"
	E1004 02:01:54.761686       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:01:55.386396       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:02:24.767904       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:02:25.394800       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:02:54.774422       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:02:55.407349       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:03:24.782263       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:03:25.418721       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:03:54.788954       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:03:55.429080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:04:24.795710       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:04:25.438658       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61] <==
	* I1004 01:55:28.166208       1 server_others.go:69] "Using iptables proxy"
	I1004 01:55:28.221702       1 node.go:141] Successfully retrieved node IP: 192.168.50.170
	I1004 01:55:28.434278       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:55:28.434353       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:55:28.437589       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:55:28.437672       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:55:28.437869       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:55:28.437904       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:55:28.439989       1 config.go:188] "Starting service config controller"
	I1004 01:55:28.440043       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:55:28.440073       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:55:28.440088       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:55:28.443914       1 config.go:315] "Starting node config controller"
	I1004 01:55:28.443952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:55:28.540415       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:55:28.540491       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:55:28.544397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a] <==
	* W1004 01:55:08.512585       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:55:08.512671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:55:09.315267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:55:09.315327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 01:55:09.364264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.364357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.399692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:55:09.399746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 01:55:09.466840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.466900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.511859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.511917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.538021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:55:09.538233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 01:55:09.555874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:55:09.555971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 01:55:09.561978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 01:55:09.562031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 01:55:09.632604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:55:09.632728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 01:55:09.677718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.677818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:10.033937       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:55:10.033987       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 01:55:11.895035       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:01 UTC, ends at Wed 2023-10-04 02:04:32 UTC. --
	Oct 04 02:01:58 embed-certs-509298 kubelet[3795]: E1004 02:01:58.404728    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:02:12 embed-certs-509298 kubelet[3795]: E1004 02:02:12.402477    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:02:12 embed-certs-509298 kubelet[3795]: E1004 02:02:12.539342    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:02:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:02:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:02:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:02:23 embed-certs-509298 kubelet[3795]: E1004 02:02:23.402351    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:02:37 embed-certs-509298 kubelet[3795]: E1004 02:02:37.402269    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:02:49 embed-certs-509298 kubelet[3795]: E1004 02:02:49.402213    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:03:00 embed-certs-509298 kubelet[3795]: E1004 02:03:00.403094    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:03:12 embed-certs-509298 kubelet[3795]: E1004 02:03:12.539916    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:03:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:03:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:03:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:03:13 embed-certs-509298 kubelet[3795]: E1004 02:03:13.402657    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:03:27 embed-certs-509298 kubelet[3795]: E1004 02:03:27.401972    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:03:40 embed-certs-509298 kubelet[3795]: E1004 02:03:40.403307    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:03:52 embed-certs-509298 kubelet[3795]: E1004 02:03:52.404327    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:04:06 embed-certs-509298 kubelet[3795]: E1004 02:04:06.401700    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:04:12 embed-certs-509298 kubelet[3795]: E1004 02:04:12.538044    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:04:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:04:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:04:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:04:18 embed-certs-509298 kubelet[3795]: E1004 02:04:18.402384    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:04:30 embed-certs-509298 kubelet[3795]: E1004 02:04:30.402592    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	
	* 
	* ==> storage-provisioner [0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce] <==
	* I1004 01:55:28.819935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:55:28.830585       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:55:28.830729       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:55:28.848802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:55:28.849770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1!
	I1004 01:55:28.849626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23f1fe67-f369-4c37-928b-269ee8b0516f", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1 became leader
	I1004 01:55:28.950294       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-509298 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-27696
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696: exit status 1 (66.795558ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-27696" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.56s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.59s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 01:56:05.194379  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-273516 -n no-preload-273516
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:04:59.032106816 +0000 UTC m=+4891.403137851
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-273516 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-273516 logs -n 25: (1.402620805s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-107182        | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-528457                              | cert-expiration-528457       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-554732 | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | disable-driver-mounts-554732                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:53:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:53:11.828274  169515 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:53:11.828536  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828547  169515 out.go:309] Setting ErrFile to fd 2...
	I1004 01:53:11.828552  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828768  169515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:53:11.829347  169515 out.go:303] Setting JSON to false
	I1004 01:53:11.830376  169515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9343,"bootTime":1696375049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:53:11.830441  169515 start.go:138] virtualization: kvm guest
	I1004 01:53:11.832711  169515 out.go:177] * [default-k8s-diff-port-239802] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:53:11.834324  169515 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:53:11.835643  169515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:53:11.834361  169515 notify.go:220] Checking for updates...
	I1004 01:53:11.838217  169515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:53:11.839555  169515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:53:11.840846  169515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:53:11.842161  169515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:53:07.280681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:09.778282  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.779681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.843761  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:53:11.844277  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.844360  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.860250  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I1004 01:53:11.860700  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.861256  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.861279  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.861643  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.861866  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.862175  169515 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:53:11.862447  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.862487  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.877262  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1004 01:53:11.877711  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.878333  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.878357  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.878806  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.879014  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.917299  169515 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:53:11.918706  169515 start.go:298] selected driver: kvm2
	I1004 01:53:11.918721  169515 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.918831  169515 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:53:11.919435  169515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.919506  169515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:53:11.934986  169515 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:53:11.935329  169515 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:53:11.935365  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:53:11.935379  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:53:11.935399  169515 start_flags.go:321] config:
	{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.935580  169515 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.937595  169515 out.go:177] * Starting control plane node default-k8s-diff-port-239802 in cluster default-k8s-diff-port-239802
	I1004 01:53:11.938856  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:53:11.938906  169515 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:53:11.938918  169515 cache.go:57] Caching tarball of preloaded images
	I1004 01:53:11.939005  169515 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:53:11.939019  169515 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
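The preload step above looks for the cached tarball of pre-pulled images before downloading anything; because the lz4 archive is already on disk, the download is skipped. A minimal sketch of that kind of cache check, assuming a plain existence/size test rather than minikube's actual verification, with a hypothetical cache path:

    package main

    import (
        "fmt"
        "os"
    )

    // preloadCached reports whether the preloaded-images tarball is already on
    // disk. The size > 0 test is an illustrative stand-in for real verification.
    func preloadCached(path string) (bool, error) {
        info, err := os.Stat(path)
        if err == nil {
            return info.Size() > 0, nil // found in cache, skip the download
        }
        if os.IsNotExist(err) {
            return false, nil // caller would download the tarball here
        }
        return false, err
    }

    func main() {
        // Hypothetical cache location, for illustration only.
        path := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4")
        ok, err := preloadCached(path)
        fmt.Println("preload cached:", ok, "err:", err)
    }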
	I1004 01:53:11.939123  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:53:11.939343  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:53:11.939424  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 58.221µs
	I1004 01:53:11.939444  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:53:11.939453  169515 fix.go:54] fixHost starting: 
	I1004 01:53:11.939742  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.939789  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.954196  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I1004 01:53:11.954631  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.955177  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.955207  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.955546  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.955732  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.955907  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:53:11.957727  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Running err=<nil>
	W1004 01:53:11.957752  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:53:11.959786  169515 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-239802" VM ...
	I1004 01:53:08.669530  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.168697  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:10.723754  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.223290  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.960962  169515 machine.go:88] provisioning docker machine ...
	I1004 01:53:11.960980  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.961165  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961309  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:53:11.961321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961451  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:53:11.964100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964548  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:49:35 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:53:11.964579  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964700  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:53:11.964891  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965213  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:53:11.965415  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:53:11.965918  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:53:11.965942  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:53:14.858205  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
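The repeated "Error dialing TCP ... connect: no route to host" lines from process 169515 that follow are libmachine re-trying the SSH dial to the VM at 192.168.61.105:22 every few seconds while the guest is unreachable. A minimal sketch of that kind of dial-and-retry loop, with illustrative timeout and backoff values (not minikube's actual settings):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry keeps attempting a TCP connection until it succeeds or the
    // overall deadline passes. Per-attempt timeout and wait are illustrative.
    func dialWithRetry(addr string, overall time.Duration) (net.Conn, error) {
        deadline := time.Now().Add(overall)
        for {
            conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
            if err == nil {
                return conn, nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err) // same shape as the log lines above
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("gave up dialing %s: %w", addr, err)
            }
            time.Sleep(3 * time.Second)
        }
    }

    func main() {
        if conn, err := dialWithRetry("192.168.61.105:22", 5*time.Minute); err == nil {
            defer conn.Close()
            fmt.Println("connected:", conn.RemoteAddr())
        } else {
            fmt.Println(err)
        }
    }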
	I1004 01:53:13.780979  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:16.279971  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.170120  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.170376  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.724119  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:18.223219  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.930132  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:18.779188  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.668906  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:19.669782  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:22.169918  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.724642  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:23.225475  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.010157  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:23.279668  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.778425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.668233  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:26.669315  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.723231  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:28.222973  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:27.082190  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:27.778573  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.779483  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.168734  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:31.169219  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:30.223870  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:32.724030  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.162101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:36.234078  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:32.278768  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:34.279611  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:36.779455  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.669109  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.669923  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.224564  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.723997  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:39.724578  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:38.779567  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:41.278736  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.671432  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:40.168863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.168970  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.223844  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.224215  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.358317  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:43.278799  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.279544  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.169371  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.670033  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.726544  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.222631  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.426196  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:47.282389  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.779291  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.673161  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.170963  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.223796  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.724046  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.506087  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:52.280232  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.778941  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.668512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:55.668997  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:56.223812  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.223985  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:57.578187  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:57.281468  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:59.780369  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.169361  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.171086  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.723767  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.724182  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:03.658082  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:06.730171  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:02.278547  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:04.279504  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:06.779458  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.669174  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.169089  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.224336  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.724614  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:08.780155  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:11.281399  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:09.670536  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.170645  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:10.223678  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.724096  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.810084  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:15.882179  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:13.780199  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.280077  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:14.668216  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.668736  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:15.223755  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:17.223789  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:19.724040  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.780554  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.283185  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.672583  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.169626  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:22.223220  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:24.223653  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.962094  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:25.034104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:23.779529  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:25.785001  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:23.668523  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.170080  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.725426  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:29.224292  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.114102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:28.278824  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.280812  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:28.668973  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.669813  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.724077  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.223673  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.186185  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:32.283313  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.785440  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:33.169511  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:35.170079  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:36.223744  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:38.223824  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.270113  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:37.279625  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:39.779646  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:37.670022  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.170303  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.723833  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.723858  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.723974  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:43.338083  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:42.281698  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.778204  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.779425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.668686  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.671405  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:47.170837  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.418200  167452 pod_ready.go:81] duration metric: took 4m0.000746433s waiting for pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace to be "Ready" ...
	E1004 01:54:46.418242  167452 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:54:46.418266  167452 pod_ready.go:38] duration metric: took 4m6.792871015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:54:46.418310  167452 kubeadm.go:640] restartCluster took 4m30.137827083s
	W1004 01:54:46.418446  167452 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:54:46.418484  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
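The lines above show the fallback path: after four minutes the readiness wait hits "context deadline exceeded", restartCluster is abandoned, and the node is wiped with kubeadm reset --force before a fresh kubeadm init. The wait itself is a poll-until-deadline loop; a minimal sketch of that pattern, with an illustrative two-second poll interval:

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check until it returns true or ctx expires, mirroring the
    // 4m0s "waiting for pod ... to be Ready" loop seen above.
    func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // "context deadline exceeded", as in the log
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        err := waitFor(ctx, 2*time.Second, func() (bool, error) {
            return false, nil // stand-in for "is the metrics-server pod Ready?"
        })
        if errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("timed out waiting; falling back to kubeadm reset")
        }
    }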
	I1004 01:54:49.418125  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:48.780239  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.284905  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:49.174919  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.675479  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:52.490104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:53.778907  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:55.778958  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:54.169521  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:56.670982  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:58.570115  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:01.642220  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:57.779481  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.782476  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.170012  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:01.670386  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:00.372786  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.954218871s)
	I1004 01:55:00.372881  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:00.387256  167452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:55:00.396756  167452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:55:00.406765  167452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:55:00.406806  167452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 01:55:00.625971  167452 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:55:02.279852  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.281525  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.779641  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.170863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.671473  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:07.722109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:10.794061  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:08.780879  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.283040  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.183572  167452 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 01:55:12.183661  167452 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:55:12.183766  167452 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:55:12.183877  167452 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:55:12.183978  167452 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:55:12.184074  167452 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:55:12.185782  167452 out.go:204]   - Generating certificates and keys ...
	I1004 01:55:12.185896  167452 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:55:12.185952  167452 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:55:12.186040  167452 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:55:12.186118  167452 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:55:12.186210  167452 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:55:12.186309  167452 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:55:12.186400  167452 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:55:12.186483  167452 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:55:12.186608  167452 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:55:12.186728  167452 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:55:12.186790  167452 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:55:12.186869  167452 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:55:12.186944  167452 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:55:12.187022  167452 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:55:12.187094  167452 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:55:12.187174  167452 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:55:12.187302  167452 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:55:12.187369  167452 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:55:12.188941  167452 out.go:204]   - Booting up control plane ...
	I1004 01:55:12.189059  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:55:12.189132  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:55:12.189211  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:55:12.189324  167452 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:55:12.189452  167452 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:55:12.189504  167452 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 01:55:12.189735  167452 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:55:12.189877  167452 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004191 seconds
	I1004 01:55:12.190030  167452 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:55:12.190218  167452 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:55:12.190314  167452 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:55:12.190566  167452 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-509298 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:55:12.190670  167452 kubeadm.go:322] [bootstrap-token] Using token: i6ebw8.csx7j4uz10ltteg7
	I1004 01:55:12.192239  167452 out.go:204]   - Configuring RBAC rules ...
	I1004 01:55:12.192387  167452 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:55:12.192462  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:55:12.192608  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:55:12.192774  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:55:12.192904  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:55:12.192996  167452 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:55:12.193138  167452 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:55:12.193211  167452 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:55:12.193271  167452 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:55:12.193278  167452 kubeadm.go:322] 
	I1004 01:55:12.193325  167452 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:55:12.193332  167452 kubeadm.go:322] 
	I1004 01:55:12.193398  167452 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:55:12.193404  167452 kubeadm.go:322] 
	I1004 01:55:12.193424  167452 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:55:12.193475  167452 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:55:12.193517  167452 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:55:12.193523  167452 kubeadm.go:322] 
	I1004 01:55:12.193565  167452 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 01:55:12.193571  167452 kubeadm.go:322] 
	I1004 01:55:12.193628  167452 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:55:12.193638  167452 kubeadm.go:322] 
	I1004 01:55:12.193704  167452 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:55:12.193783  167452 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:55:12.193895  167452 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:55:12.193906  167452 kubeadm.go:322] 
	I1004 01:55:12.194003  167452 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:55:12.194073  167452 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:55:12.194080  167452 kubeadm.go:322] 
	I1004 01:55:12.194169  167452 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194254  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:55:12.194273  167452 kubeadm.go:322] 	--control-plane 
	I1004 01:55:12.194279  167452 kubeadm.go:322] 
	I1004 01:55:12.194352  167452 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:55:12.194360  167452 kubeadm.go:322] 
	I1004 01:55:12.194428  167452 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194540  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:55:12.194563  167452 cni.go:84] Creating CNI manager for ""
	I1004 01:55:12.194572  167452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:55:12.196296  167452 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:55:09.172018  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.670011  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.197574  167452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:55:12.219217  167452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:55:12.298578  167452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:55:12.298671  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.298685  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=embed-certs-509298 minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.379573  167452 ops.go:34] apiserver oom_adj: -16
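The oom_adj check above confirms the freshly started kube-apiserver runs with an OOM adjustment of -16, so the kernel is unlikely to pick it under memory pressure. A minimal sketch of the same check (pgrep for the PID, then read /proc/<pid>/oom_adj), assuming a single matching process:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strconv"
        "strings"
    )

    // apiServerOOMAdj finds the kube-apiserver PID with pgrep and reads its
    // oom_adj value, mirroring the shell pipeline run above.
    func apiServerOOMAdj() (int, error) {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return 0, fmt.Errorf("pgrep: %w", err)
        }
        pid := strings.Fields(string(out))[0] // take the first match
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
        adj, err := apiServerOOMAdj()
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj)
    }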
	I1004 01:55:12.664606  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.821682  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.427770  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.928385  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.428534  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.927827  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.780253  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.286195  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:14.169232  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.669256  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:15.428102  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:15.928404  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.428316  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.928095  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.428581  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.928158  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.428061  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.927815  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.428285  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.927597  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.874102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:19.946137  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:18.779212  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.780120  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:18.671773  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:21.169373  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.428231  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:20.927662  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.427644  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.927803  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.427969  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.928321  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.428088  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.928382  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.427968  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.686625  167452 kubeadm.go:1081] duration metric: took 12.388021854s to wait for elevateKubeSystemPrivileges.
	I1004 01:55:24.686650  167452 kubeadm.go:406] StartCluster complete in 5m8.467148399s
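The long run of "kubectl get sa default" invocations above is minikube polling until the default ServiceAccount exists; the log reports the whole elevateKubeSystemPrivileges step took about 12.4 seconds. A minimal sketch of such a readiness loop, with an illustrative attempt count, interval, and kubeconfig path:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it exits zero,
    // mirroring the repeated invocations above. Attempts and interval are
    // illustrative, not minikube's actual values.
    func waitForDefaultSA(kubeconfig string, attempts int, interval time.Duration) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if lastErr = cmd.Run(); lastErr == nil {
                return nil // the default ServiceAccount exists
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("default ServiceAccount never appeared: %w", lastErr)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 30, 500*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }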
	I1004 01:55:24.686670  167452 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.686772  167452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:55:24.689005  167452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.691164  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:55:24.691505  167452 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:55:24.691524  167452 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:55:24.691609  167452 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-509298"
	I1004 01:55:24.691645  167452 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-509298"
	W1004 01:55:24.691666  167452 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:55:24.691681  167452 addons.go:69] Setting default-storageclass=true in profile "embed-certs-509298"
	I1004 01:55:24.691711  167452 addons.go:69] Setting metrics-server=true in profile "embed-certs-509298"
	I1004 01:55:24.691721  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.691750  167452 addons.go:231] Setting addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:24.691713  167452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-509298"
	W1004 01:55:24.691763  167452 addons.go:240] addon metrics-server should already be in state true
	I1004 01:55:24.692075  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692471  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692522  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692566  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692591  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.710712  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1004 01:55:24.711360  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.711863  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1004 01:55:24.712115  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712145  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.712236  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.712668  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.712925  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712950  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.713327  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.713364  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713391  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.713880  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713918  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.715208  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I1004 01:55:24.715594  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.716155  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.716185  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.716523  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.716732  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.720408  167452 addons.go:231] Setting addon default-storageclass=true in "embed-certs-509298"
	W1004 01:55:24.720590  167452 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:55:24.720630  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.720922  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.720963  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.731384  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1004 01:55:24.732142  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.732918  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.732946  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.733348  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.733666  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I1004 01:55:24.733699  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.734163  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.734711  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.734737  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.735163  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.735400  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.735991  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.738353  167452 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:55:24.740203  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:55:24.740222  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:55:24.737643  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.740244  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.742072  167452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:55:24.743597  167452 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:24.743626  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:55:24.743648  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.744536  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745006  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.745048  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745279  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.745519  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.745719  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.745878  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.748789  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.748842  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1004 01:55:24.749267  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.749298  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.749354  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.749818  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.749892  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.749978  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.750177  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.750270  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.750325  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.750752  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.750802  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.751018  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.768787  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I1004 01:55:24.769394  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.770412  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.770438  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.770803  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.770982  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.772831  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.773101  167452 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.773120  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:55:24.773138  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.776980  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777337  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.777390  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777623  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.777827  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.778030  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.778218  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.827144  167452 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-509298" context rescaled to 1 replicas
	I1004 01:55:24.827188  167452 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.170 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:55:24.829039  167452 out.go:177] * Verifying Kubernetes components...
	I1004 01:55:24.830422  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:24.912112  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:55:24.912145  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:55:24.941943  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.953635  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:55:24.953669  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:55:24.964038  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:25.010973  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.011004  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:55:25.069236  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.073447  167452 node_ready.go:35] waiting up to 6m0s for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.073533  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:55:26.026178  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:23.280683  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.280934  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.276517  167452 node_ready.go:49] node "embed-certs-509298" has status "Ready":"True"
	I1004 01:55:25.276548  167452 node_ready.go:38] duration metric: took 203.068295ms waiting for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.276561  167452 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:25.459727  167452 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:26.648518  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706528042s)
	I1004 01:55:26.648633  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.648655  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.648984  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649002  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.649012  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.649021  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.649326  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:26.649367  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649378  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.670495  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.670520  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.670831  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.670890  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318331  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.35425456s)
	I1004 01:55:27.318392  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318407  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318442  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.249161738s)
	I1004 01:55:27.318496  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318502  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244935012s)
	I1004 01:55:27.318516  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318526  167452 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 01:55:27.318839  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318886  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318904  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318915  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318934  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318944  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318946  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318966  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318980  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318993  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.319203  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319225  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319232  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319242  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.319257  167452 addons.go:467] Verifying addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:27.319290  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319300  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.321408  167452 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1004 01:55:23.171045  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.171137  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.323360  167452 addons.go:502] enable addons completed in 2.631835233s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1004 01:55:27.504611  167452 pod_ready.go:102] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:28.987732  167452 pod_ready.go:92] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.987757  167452 pod_ready.go:81] duration metric: took 3.527990687s waiting for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.987769  167452 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993933  167452 pod_ready.go:92] pod "etcd-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.993953  167452 pod_ready.go:81] duration metric: took 6.17579ms waiting for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993966  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000725  167452 pod_ready.go:92] pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.000747  167452 pod_ready.go:81] duration metric: took 6.77205ms waiting for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000759  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005757  167452 pod_ready.go:92] pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.005779  167452 pod_ready.go:81] duration metric: took 5.011182ms waiting for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005790  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010519  167452 pod_ready.go:92] pod "kube-proxy-f99th" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.010537  167452 pod_ready.go:81] duration metric: took 4.738537ms waiting for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010548  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383772  167452 pod_ready.go:92] pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.383795  167452 pod_ready.go:81] duration metric: took 373.240101ms waiting for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383803  167452 pod_ready.go:38] duration metric: took 4.107228637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:29.383834  167452 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:29.383882  167452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:29.399227  167452 api_server.go:72] duration metric: took 4.572006648s to wait for apiserver process to appear ...
	I1004 01:55:29.399259  167452 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:29.399279  167452 api_server.go:253] Checking apiserver healthz at https://192.168.50.170:8443/healthz ...
	I1004 01:55:29.405336  167452 api_server.go:279] https://192.168.50.170:8443/healthz returned 200:
	ok
	I1004 01:55:29.406768  167452 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:29.406794  167452 api_server.go:131] duration metric: took 7.526875ms to wait for apiserver health ...
	I1004 01:55:29.406804  167452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:29.586194  167452 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:29.586225  167452 system_pods.go:61] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.586230  167452 system_pods.go:61] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.586236  167452 system_pods.go:61] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.586241  167452 system_pods.go:61] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.586248  167452 system_pods.go:61] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.586253  167452 system_pods.go:61] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.586261  167452 system_pods.go:61] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.586269  167452 system_pods.go:61] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.586276  167452 system_pods.go:74] duration metric: took 179.466307ms to wait for pod list to return data ...
	I1004 01:55:29.586289  167452 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:29.782372  167452 default_sa.go:45] found service account: "default"
	I1004 01:55:29.782395  167452 default_sa.go:55] duration metric: took 196.098004ms for default service account to be created ...
	I1004 01:55:29.782403  167452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:29.988230  167452 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:29.988261  167452 system_pods.go:89] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.988267  167452 system_pods.go:89] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.988271  167452 system_pods.go:89] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.988276  167452 system_pods.go:89] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.988281  167452 system_pods.go:89] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.988285  167452 system_pods.go:89] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.988298  167452 system_pods.go:89] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.988305  167452 system_pods.go:89] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.988313  167452 system_pods.go:126] duration metric: took 205.9045ms to wait for k8s-apps to be running ...
	I1004 01:55:29.988323  167452 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:29.988369  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:30.003487  167452 system_svc.go:56] duration metric: took 15.153598ms WaitForService to wait for kubelet.
	I1004 01:55:30.003513  167452 kubeadm.go:581] duration metric: took 5.176299768s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:30.003534  167452 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:30.184152  167452 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:30.184177  167452 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:30.184186  167452 node_conditions.go:105] duration metric: took 180.648418ms to run NodePressure ...
	I1004 01:55:30.184198  167452 start.go:228] waiting for startup goroutines ...
	I1004 01:55:30.184204  167452 start.go:233] waiting for cluster config update ...
	I1004 01:55:30.184213  167452 start.go:242] writing updated cluster config ...
	I1004 01:55:30.184486  167452 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:30.233803  167452 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:30.235636  167452 out.go:177] * Done! kubectl is now configured to use "embed-certs-509298" cluster and "default" namespace by default
	I1004 01:55:29.098156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:27.779362  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.779502  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:31.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.670021  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.678512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:32.172222  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:35.178103  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:34.279433  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:36.781532  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:34.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:37.170113  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:38.254127  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:39.278584  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.279085  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:39.668721  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.670095  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:44.330119  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:43.780710  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:45.782354  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.472905  166755 pod_ready.go:81] duration metric: took 4m0.000518679s waiting for pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:46.472936  166755 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:46.472946  166755 pod_ready.go:38] duration metric: took 4m5.201194434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:46.472975  166755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:46.473020  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:46.473075  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:46.533201  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:46.533233  166755 cri.go:89] found id: ""
	I1004 01:55:46.533243  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:46.533304  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.538613  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:46.538673  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:46.580801  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:46.580826  166755 cri.go:89] found id: ""
	I1004 01:55:46.580834  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:46.580896  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.586423  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:46.586510  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:46.645487  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:46.645526  166755 cri.go:89] found id: ""
	I1004 01:55:46.645535  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:46.645618  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.650643  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:46.650719  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:46.693457  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:46.693482  166755 cri.go:89] found id: ""
	I1004 01:55:46.693492  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:46.693553  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.698463  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:46.698538  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:46.744251  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:46.744279  166755 cri.go:89] found id: ""
	I1004 01:55:46.744289  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:46.744353  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.749343  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:46.749419  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:46.792717  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:46.792745  166755 cri.go:89] found id: ""
	I1004 01:55:46.792755  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:46.792820  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.797417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:46.797492  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:46.843004  166755 cri.go:89] found id: ""
	I1004 01:55:46.843033  166755 logs.go:284] 0 containers: []
	W1004 01:55:46.843044  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:46.843051  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:46.843114  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:44.169475  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.171848  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:47.402086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:46.883372  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:46.883397  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.883405  166755 cri.go:89] found id: ""
	I1004 01:55:46.883415  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:46.883476  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.888350  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.892981  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:46.893010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.936801  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:46.936829  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:46.983092  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:46.983124  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:46.997604  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:46.997634  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:47.041461  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:47.041500  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:47.098192  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:47.098234  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:47.139982  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:47.140010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:47.184753  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:47.184789  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:47.242417  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:47.242456  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:47.290664  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:47.290696  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:47.332998  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:47.333035  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:47.779448  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:47.779490  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:47.951031  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:47.951067  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.505155  166755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:50.522774  166755 api_server.go:72] duration metric: took 4m16.635946913s to wait for apiserver process to appear ...
	I1004 01:55:50.522804  166755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:50.522848  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:50.522929  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:50.565196  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.565220  166755 cri.go:89] found id: ""
	I1004 01:55:50.565232  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:50.565288  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.569426  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:50.569488  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:50.608113  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:50.608138  166755 cri.go:89] found id: ""
	I1004 01:55:50.608147  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:50.608194  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.612671  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:50.612730  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:50.659777  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.659806  166755 cri.go:89] found id: ""
	I1004 01:55:50.659817  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:50.659888  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.664188  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:50.664260  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:50.709318  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:50.709346  166755 cri.go:89] found id: ""
	I1004 01:55:50.709358  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:50.709422  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.713604  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:50.713674  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:50.757565  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:50.757597  166755 cri.go:89] found id: ""
	I1004 01:55:50.757607  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:50.757666  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.761646  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:50.761711  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:50.802683  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:50.802712  166755 cri.go:89] found id: ""
	I1004 01:55:50.802722  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:50.802785  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.807369  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:50.807443  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:50.849917  166755 cri.go:89] found id: ""
	I1004 01:55:50.849952  166755 logs.go:284] 0 containers: []
	W1004 01:55:50.849965  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:50.849974  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:50.850042  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:50.889329  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:50.889353  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:50.889357  166755 cri.go:89] found id: ""
	I1004 01:55:50.889365  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:50.889489  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.894295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.898319  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:50.898345  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:50.950303  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:50.950339  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.989731  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:50.989767  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:51.036483  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:51.036526  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:51.094053  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:51.094109  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:51.234887  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:51.234922  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:51.283233  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:51.283276  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:51.340569  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:51.340610  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:51.751585  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:51.751629  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:51.765404  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:51.765446  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:51.813579  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:51.813611  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:51.853408  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:51.853458  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:48.670114  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:51.169274  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:53.482075  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:56.554101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:51.899649  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:51.899686  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.447493  166755 api_server.go:253] Checking apiserver healthz at https://192.168.83.165:8443/healthz ...
	I1004 01:55:54.453104  166755 api_server.go:279] https://192.168.83.165:8443/healthz returned 200:
	ok
	I1004 01:55:54.455299  166755 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:54.455327  166755 api_server.go:131] duration metric: took 3.932514868s to wait for apiserver health ...
	I1004 01:55:54.455338  166755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:54.455368  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:54.455431  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:54.501159  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:54.501180  166755 cri.go:89] found id: ""
	I1004 01:55:54.501188  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:54.501250  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.506342  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:54.506418  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:54.548780  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:54.548801  166755 cri.go:89] found id: ""
	I1004 01:55:54.548808  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:54.548863  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.560318  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:54.560397  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:54.606477  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:54.606509  166755 cri.go:89] found id: ""
	I1004 01:55:54.606521  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:54.606581  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.611004  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:54.611069  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:54.657003  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.657031  166755 cri.go:89] found id: ""
	I1004 01:55:54.657041  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:54.657106  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.661386  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:54.661459  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:54.713209  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:54.713237  166755 cri.go:89] found id: ""
	I1004 01:55:54.713246  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:54.713295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.718417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:54.718489  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:54.767945  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:54.767969  166755 cri.go:89] found id: ""
	I1004 01:55:54.767979  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:54.768040  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.772488  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:54.772576  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:54.823905  166755 cri.go:89] found id: ""
	I1004 01:55:54.823935  166755 logs.go:284] 0 containers: []
	W1004 01:55:54.823945  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:54.823954  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:54.824017  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:54.878037  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:54.878069  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:54.878076  166755 cri.go:89] found id: ""
	I1004 01:55:54.878086  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:54.878146  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.883456  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.887685  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:54.887708  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:55.021714  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:55.021761  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:55.066557  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:55.066595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:55.125278  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:55.125336  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:55.170570  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:55.170607  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:55.212833  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:55.212866  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:55.552035  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:55.552080  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:55.601698  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:55.601738  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:55.662745  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:55.662786  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:55.707632  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:55.707665  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:55.746461  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:55.746489  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:55.809111  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:55.809150  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:55.850557  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:55.850595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
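	(Note: the log-gathering pass above, using crictl, journalctl and dmesg, can be repeated by hand when triaging a node. A minimal sketch, run on the minikube guest, for example via "minikube ssh", using the same commands and limits that logs.go issues; <container-id> is a placeholder for any of the IDs printed above:

	    sudo crictl ps -a --quiet --name=storage-provisioner       # container IDs for one component
	    sudo /usr/bin/crictl logs --tail 400 <container-id>         # per-container logs (apiserver, etcd, ...)
	    sudo journalctl -u kubelet -n 400                           # kubelet unit logs
	    sudo journalctl -u crio -n 400                              # CRI-O unit logs
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	)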
	I1004 01:55:53.670067  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:55.670340  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:58.374828  166755 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:58.374864  166755 system_pods.go:61] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.374871  166755 system_pods.go:61] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.374878  166755 system_pods.go:61] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.374885  166755 system_pods.go:61] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.374891  166755 system_pods.go:61] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.374898  166755 system_pods.go:61] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.374909  166755 system_pods.go:61] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.374919  166755 system_pods.go:61] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.374934  166755 system_pods.go:74] duration metric: took 3.919586902s to wait for pod list to return data ...
	I1004 01:55:58.374943  166755 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:58.379203  166755 default_sa.go:45] found service account: "default"
	I1004 01:55:58.379228  166755 default_sa.go:55] duration metric: took 4.271125ms for default service account to be created ...
	I1004 01:55:58.379237  166755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:58.389346  166755 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:58.389369  166755 system_pods.go:89] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.389375  166755 system_pods.go:89] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.389379  166755 system_pods.go:89] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.389384  166755 system_pods.go:89] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.389388  166755 system_pods.go:89] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.389391  166755 system_pods.go:89] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.389399  166755 system_pods.go:89] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.389404  166755 system_pods.go:89] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.389411  166755 system_pods.go:126] duration metric: took 10.168718ms to wait for k8s-apps to be running ...
	I1004 01:55:58.389422  166755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:58.389467  166755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:58.410785  166755 system_svc.go:56] duration metric: took 21.353423ms WaitForService to wait for kubelet.
	I1004 01:55:58.410814  166755 kubeadm.go:581] duration metric: took 4m24.523994722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:58.410840  166755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:58.414873  166755 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:58.414899  166755 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:58.414913  166755 node_conditions.go:105] duration metric: took 4.067596ms to run NodePressure ...
	I1004 01:55:58.414927  166755 start.go:228] waiting for startup goroutines ...
	I1004 01:55:58.414936  166755 start.go:233] waiting for cluster config update ...
	I1004 01:55:58.414948  166755 start.go:242] writing updated cluster config ...
	I1004 01:55:58.415228  166755 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:58.469095  166755 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:58.470860  166755 out.go:177] * Done! kubectl is now configured to use "no-preload-273516" cluster and "default" namespace by default
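	(Note: with the no-preload profile reported Done above, a quick sanity check from the host might look like the following. The context and namespace names are the ones minikube prints in the Done line; this is a sketch, not part of the test run:

	    kubectl config use-context no-preload-273516
	    kubectl get nodes -o wide                  # the single node should be Ready
	    kubectl -n kube-system get pods            # the eight pods listed above; metrics-server still Pending
	)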
	I1004 01:55:57.863028  167496 pod_ready.go:81] duration metric: took 4m0.000377885s waiting for pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:57.863064  167496 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:57.863085  167496 pod_ready.go:38] duration metric: took 4m1.198718353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:57.863115  167496 kubeadm.go:640] restartCluster took 5m18.524534819s
	W1004 01:55:57.863173  167496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:55:57.863207  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:56:02.773154  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.909900495s)
	I1004 01:56:02.773229  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:02.786455  167496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:56:02.796780  167496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:56:02.806618  167496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:56:02.806677  167496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1004 01:56:02.872853  167496 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1004 01:56:02.872972  167496 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:56:03.024967  167496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:56:03.025128  167496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:56:03.025294  167496 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1004 01:56:03.249926  167496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:56:03.251503  167496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:56:03.259788  167496 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1004 01:56:03.380740  167496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:56:03.382796  167496 out.go:204]   - Generating certificates and keys ...
	I1004 01:56:03.382964  167496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:56:03.383087  167496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:56:03.383195  167496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:56:03.383291  167496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:56:03.383404  167496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:56:03.383494  167496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:56:03.383899  167496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:56:03.384184  167496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:56:03.384678  167496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:56:03.385233  167496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:56:03.385302  167496 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:56:03.385358  167496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:56:03.892124  167496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:56:04.106548  167496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:56:04.323375  167496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:56:04.510112  167496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:56:04.512389  167496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:56:02.634095  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:05.710104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:04.514200  167496 out.go:204]   - Booting up control plane ...
	I1004 01:56:04.514318  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:56:04.523675  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:56:04.534185  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:56:04.535396  167496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:56:04.551484  167496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:56:11.786134  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:14.564099  167496 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.011014 seconds
	I1004 01:56:14.564257  167496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:56:14.578656  167496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:56:15.106513  167496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:56:15.106688  167496 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-107182 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1004 01:56:15.616926  167496 kubeadm.go:322] [bootstrap-token] Using token: ocks1c.c9c0w76e1jxk27wy
	I1004 01:56:15.619692  167496 out.go:204]   - Configuring RBAC rules ...
	I1004 01:56:15.619849  167496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:56:15.627037  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:56:15.631821  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:56:15.635639  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:56:15.641343  167496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:56:15.709440  167496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:56:16.046524  167496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:56:16.046544  167496 kubeadm.go:322] 
	I1004 01:56:16.046605  167496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:56:16.046616  167496 kubeadm.go:322] 
	I1004 01:56:16.046691  167496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:56:16.046698  167496 kubeadm.go:322] 
	I1004 01:56:16.046727  167496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:56:16.046781  167496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:56:16.046877  167496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:56:16.046902  167496 kubeadm.go:322] 
	I1004 01:56:16.046980  167496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:56:16.047101  167496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:56:16.047198  167496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:56:16.047210  167496 kubeadm.go:322] 
	I1004 01:56:16.047316  167496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1004 01:56:16.047429  167496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:56:16.047448  167496 kubeadm.go:322] 
	I1004 01:56:16.047560  167496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.047736  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:56:16.047783  167496 kubeadm.go:322]     --control-plane 	  
	I1004 01:56:16.047790  167496 kubeadm.go:322] 
	I1004 01:56:16.047912  167496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:56:16.047926  167496 kubeadm.go:322] 
	I1004 01:56:16.048006  167496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.048141  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:56:16.048764  167496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:56:16.048792  167496 cni.go:84] Creating CNI manager for ""
	I1004 01:56:16.048803  167496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:56:16.051468  167496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:56:14.858093  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:16.052923  167496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:56:16.062452  167496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
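	(Note: the 457-byte conflist written above is not reproduced in the log. The sketch below shows an illustrative bridge-plus-portmap conflist of the shape minikube generates for the bridge CNI; the exact field values are an assumption, not the file that was copied:

	    sudo mkdir -p /etc/cni/net.d
	    # /etc/cni/net.d/1-k8s.conflist would then contain, roughly:
	    #   { "cniVersion": "0.3.1", "name": "bridge",
	    #     "plugins": [
	    #       { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
	    #         "hairpinMode": true, "ipam": { "type": "host-local" } },
	    #       { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	)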
	I1004 01:56:16.083093  167496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:56:16.083231  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.083232  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=old-k8s-version-107182 minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.097641  167496 ops.go:34] apiserver oom_adj: -16
	I1004 01:56:16.345591  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.432507  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:17.021142  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.938186  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:17.521246  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.020458  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.521120  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.020993  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.521313  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.020752  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.520524  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.020817  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.521038  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:22.020893  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.014159  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:22.520834  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.021375  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.521450  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.021541  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.521194  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.021420  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.521388  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.020861  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.520474  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:27.020520  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.094110  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:27.520733  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.020857  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.520471  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.020869  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.520801  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.020670  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.521376  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.021462  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.521133  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.021118  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.139808  167496 kubeadm.go:1081] duration metric: took 16.056644408s to wait for elevateKubeSystemPrivileges.
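	(Note: the repeated "get sa default" calls above are minikube polling for the default service account before granting kube-system:default cluster-admin. Done by hand, reusing the binary and kubeconfig paths from the log, the equivalent is roughly:

	    # wait for the default service account, then bind cluster-admin to kube-system:default
	    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac \
	          --clusterrole=cluster-admin --serviceaccount=kube-system:default \
	          --kubeconfig=/var/lib/minikube/kubeconfig
	)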
	I1004 01:56:32.139853  167496 kubeadm.go:406] StartCluster complete in 5m52.878327636s
	I1004 01:56:32.139879  167496 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.139983  167496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:56:32.143255  167496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.143507  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:56:32.143608  167496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:56:32.143692  167496 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143710  167496 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-107182"
	I1004 01:56:32.143708  167496 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1004 01:56:32.143717  167496 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:56:32.143732  167496 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143751  167496 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-107182"
	W1004 01:56:32.143762  167496 addons.go:240] addon metrics-server should already be in state true
	I1004 01:56:32.143777  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143807  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143717  167496 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143830  167496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-107182"
	I1004 01:56:32.144169  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144206  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144216  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144236  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144237  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144317  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.161736  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1004 01:56:32.161739  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I1004 01:56:32.162384  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162494  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162735  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1004 01:56:32.163007  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163024  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163156  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163168  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163232  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.163731  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163747  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163809  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.163851  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164091  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164163  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.164565  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.164611  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.165506  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.165553  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.168699  167496 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-107182"
	W1004 01:56:32.168721  167496 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:56:32.168751  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.169121  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.169148  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.187125  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I1004 01:56:32.187814  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188164  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1004 01:56:32.188441  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.188462  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.188705  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188823  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1004 01:56:32.188990  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.189161  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.189340  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189357  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189428  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.189669  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189688  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189750  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190009  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.190037  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190736  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.190776  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.191392  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.193250  167496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:56:32.192019  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.194795  167496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.194811  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:56:32.194833  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196365  167496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:56:32.197757  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:56:32.197778  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:56:32.197798  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196532  167496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-107182" context rescaled to 1 replicas
	I1004 01:56:32.197859  167496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:56:32.199796  167496 out.go:177] * Verifying Kubernetes components...
	I1004 01:56:32.201368  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:32.202167  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202462  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202766  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.202794  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203229  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203304  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.203321  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203485  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203677  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.203744  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.204034  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204104  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204194  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.204755  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.211128  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I1004 01:56:32.211596  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.212134  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.212157  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.212528  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.212740  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.214335  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.214592  167496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.214608  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:56:32.214627  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.217280  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.217751  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.217781  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.218036  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.218202  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.218378  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.218528  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.390605  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.392051  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.434602  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:56:32.434629  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:56:32.469744  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:56:32.469793  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:56:32.488555  167496 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.489370  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:56:32.500794  167496 node_ready.go:49] node "old-k8s-version-107182" has status "Ready":"True"
	I1004 01:56:32.500818  167496 node_ready.go:38] duration metric: took 12.232731ms waiting for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.500828  167496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:32.514535  167496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:32.515832  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:32.515859  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:56:32.582811  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:33.449546  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05890047s)
	I1004 01:56:33.449619  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.449635  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450076  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450100  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450113  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.450115  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.450139  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450431  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450454  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450503  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.468938  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.468964  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.469311  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.469332  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.700534  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308435267s)
	I1004 01:56:33.700563  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.211163368s)
	I1004 01:56:33.700582  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.700596  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.700593  167496 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1004 01:56:33.700975  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.700998  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.701010  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.701012  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701021  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.701273  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701321  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.701330  167496 main.go:141] libmachine: Making call to close connection to plugin binary
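	(Note: the sed pipeline that just completed rewrites the CoreDNS Corefile to add a log directive and a hosts block mapping host.minikube.internal to the gateway address. It can be confirmed after the fact with something like the command below; the context name is assumed to be the one minikube writes to the kubeconfig, and the commented fragment is an approximation reconstructed from the sed expressions above:

	    kubectl --context old-k8s-version-107182 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	    # the Corefile should now contain, inside the .:53 block:
	    #     log
	    #     hosts {
	    #        192.168.72.1 host.minikube.internal
	    #        fallthrough
	    #     }
	)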
	I1004 01:56:33.823328  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240468144s)
	I1004 01:56:33.823384  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823398  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.823769  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.823805  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823819  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823832  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.824142  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.824164  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.824176  167496 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-107182"
	I1004 01:56:33.825973  167496 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 01:56:33.162156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:33.827977  167496 addons.go:502] enable addons completed in 1.684381662s: enabled=[default-storageclass storage-provisioner metrics-server]
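	(Note: a quick confirmation of the three addons reported enabled above might look like the following sketch. The profile name comes from the log; k8s-app=metrics-server is assumed to be the label used by the metrics-server addon manifests:

	    minikube -p old-k8s-version-107182 addons list | grep -E 'default-storageclass|storage-provisioner|metrics-server'
	    kubectl --context old-k8s-version-107182 -n kube-system get pods -l k8s-app=metrics-server
	)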
	I1004 01:56:34.532496  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:37.031254  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:39.242136  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:39.031853  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:41.531371  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:42.314165  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:44.032920  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:44.533712  167496 pod_ready.go:92] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.533740  167496 pod_ready.go:81] duration metric: took 12.019178851s waiting for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.533753  167496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539300  167496 pod_ready.go:92] pod "kube-proxy-8lcf5" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.539327  167496 pod_ready.go:81] duration metric: took 5.564927ms waiting for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539337  167496 pod_ready.go:38] duration metric: took 12.038496722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:44.539360  167496 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:56:44.539419  167496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:56:44.554851  167496 api_server.go:72] duration metric: took 12.356945821s to wait for apiserver process to appear ...
	I1004 01:56:44.554881  167496 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:56:44.554900  167496 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I1004 01:56:44.562352  167496 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I1004 01:56:44.563304  167496 api_server.go:141] control plane version: v1.16.0
	I1004 01:56:44.563333  167496 api_server.go:131] duration metric: took 8.444498ms to wait for apiserver health ...
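	(Note: the healthz probe above can be reproduced from any machine that can reach the node IP. On a default kubeadm cluster /healthz is readable anonymously through the system:public-info-viewer binding, and -k is needed because the serving certificate is signed by the cluster CA rather than a trusted root:

	    curl -k https://192.168.72.182:8443/healthz
	    # expected response body: ok
	)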
	I1004 01:56:44.563344  167496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:56:44.567672  167496 system_pods.go:59] 4 kube-system pods found
	I1004 01:56:44.567701  167496 system_pods.go:61] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.567708  167496 system_pods.go:61] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.567719  167496 system_pods.go:61] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.567728  167496 system_pods.go:61] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.567736  167496 system_pods.go:74] duration metric: took 4.384195ms to wait for pod list to return data ...
	I1004 01:56:44.567746  167496 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:56:44.570566  167496 default_sa.go:45] found service account: "default"
	I1004 01:56:44.570597  167496 default_sa.go:55] duration metric: took 2.843182ms for default service account to be created ...
	I1004 01:56:44.570608  167496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:56:44.575497  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.575524  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.575534  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.575543  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.575552  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.575572  167496 retry.go:31] will retry after 201.187376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:44.781105  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.781140  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.781146  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.781155  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.781162  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.781179  167496 retry.go:31] will retry after 304.433498ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.090030  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.090055  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.090061  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.090067  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.090073  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.090088  167496 retry.go:31] will retry after 344.077296ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.439684  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.439712  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.439717  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.439723  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.439729  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.439743  167496 retry.go:31] will retry after 379.883887ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.824813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.824839  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.824844  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.824853  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.824859  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.824873  167496 retry.go:31] will retry after 650.141708ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:46.480447  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:46.480473  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:46.480478  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:46.480486  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:46.480492  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:46.480507  167496 retry.go:31] will retry after 870.616376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:47.356424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:47.356452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:47.356457  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:47.356464  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:47.356470  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:47.356486  167496 retry.go:31] will retry after 972.499927ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
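	(Note: the retries above are waiting for the etcd, kube-apiserver, kube-controller-manager and kube-scheduler static pods to register in kube-system. While the loop runs they can be checked directly; this is a sketch, and tier=control-plane is assumed to be the label kubeadm puts on its static-pod manifests:

	    kubectl --context old-k8s-version-107182 -n kube-system get pods -l tier=control-plane
	    # or, on the node, confirm the manifests kubeadm wrote earlier in this log:
	    sudo ls /etc/kubernetes/manifests
	)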
	I1004 01:56:48.394163  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:51.466067  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:48.333234  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:48.333263  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:48.333269  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:48.333276  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:48.333282  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:48.333296  167496 retry.go:31] will retry after 1.071674914s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:49.410813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:49.410843  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:49.410853  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:49.410864  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:49.410873  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:49.410892  167496 retry.go:31] will retry after 1.833649065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:51.251023  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:51.251046  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:51.251052  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:51.251058  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:51.251065  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:51.251080  167496 retry.go:31] will retry after 1.914402614s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:53.170633  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:53.170675  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:53.170684  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:53.170697  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:53.170706  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:53.170727  167496 retry.go:31] will retry after 2.900802753s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:56.077479  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:56.077505  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:56.077510  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:56.077517  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:56.077523  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:56.077539  167496 retry.go:31] will retry after 2.931373296s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:57.546142  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:00.618191  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:59.014602  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:59.014631  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:59.014639  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:59.014650  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:59.014658  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:59.014679  167496 retry.go:31] will retry after 3.641834809s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:06.698118  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:02.662919  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:02.662957  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:02.662962  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:02.662978  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:02.662986  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:02.663000  167496 retry.go:31] will retry after 5.249216721s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:09.770058  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:07.918510  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:07.918540  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:07.918545  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:07.918551  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:07.918558  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:07.918575  167496 retry.go:31] will retry after 5.21551618s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:15.850131  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:13.139424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:13.139452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:13.139461  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:13.139470  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:13.139480  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:13.139499  167496 retry.go:31] will retry after 6.379920631s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:18.922143  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:19.525272  167496 system_pods.go:86] 5 kube-system pods found
	I1004 01:57:19.525311  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:19.525322  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Pending
	I1004 01:57:19.525329  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:19.525340  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:19.525350  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:19.525372  167496 retry.go:31] will retry after 7.200178423s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:25.002152  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:26.734572  167496 system_pods.go:86] 6 kube-system pods found
	I1004 01:57:26.734603  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:26.734610  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:26.734615  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:26.734619  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:26.734626  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:26.734640  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:26.734662  167496 retry.go:31] will retry after 10.892871067s: missing components: etcd, kube-apiserver
	I1004 01:57:28.078109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:34.158104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:37.634963  167496 system_pods.go:86] 8 kube-system pods found
	I1004 01:57:37.634993  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:37.634998  167496 system_pods.go:89] "etcd-old-k8s-version-107182" [18310540-21e4-4225-9ce0-e662fae16ca5] Running
	I1004 01:57:37.635003  167496 system_pods.go:89] "kube-apiserver-old-k8s-version-107182" [7418c38e-cae2-4d96-bb43-6827c37fc3dd] Running
	I1004 01:57:37.635008  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:37.635012  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:37.635015  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:37.635023  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:37.635028  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:37.635035  167496 system_pods.go:126] duration metric: took 53.064420406s to wait for k8s-apps to be running ...
	I1004 01:57:37.635042  167496 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:57:37.635088  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:57:37.654311  167496 system_svc.go:56] duration metric: took 19.259695ms WaitForService to wait for kubelet.
	I1004 01:57:37.654335  167496 kubeadm.go:581] duration metric: took 1m5.456439597s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:57:37.654358  167496 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:57:37.658645  167496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:57:37.658691  167496 node_conditions.go:123] node cpu capacity is 2
	I1004 01:57:37.658730  167496 node_conditions.go:105] duration metric: took 4.365872ms to run NodePressure ...
	I1004 01:57:37.658744  167496 start.go:228] waiting for startup goroutines ...
	I1004 01:57:37.658753  167496 start.go:233] waiting for cluster config update ...
	I1004 01:57:37.658763  167496 start.go:242] writing updated cluster config ...
	I1004 01:57:37.659093  167496 ssh_runner.go:195] Run: rm -f paused
	I1004 01:57:37.707603  167496 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1004 01:57:37.709678  167496 out.go:177] 
	W1004 01:57:37.711433  167496 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1004 01:57:37.713148  167496 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1004 01:57:37.714765  167496 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-107182" cluster and "default" namespace by default
	I1004 01:57:37.226085  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:43.306106  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:46.378086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:49.379613  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:57:49.379686  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:57:49.381326  169515 machine.go:91] provisioned docker machine in 4m37.42034364s
	I1004 01:57:49.381400  169515 fix.go:56] fixHost completed within 4m37.441947276s
	I1004 01:57:49.381413  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 4m37.441976851s
	W1004 01:57:49.381431  169515 start.go:688] error starting host: provision: host is not running
	W1004 01:57:49.381511  169515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1004 01:57:49.381527  169515 start.go:703] Will try again in 5 seconds ...
	I1004 01:57:54.381970  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:57:54.382105  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 82.376µs
	I1004 01:57:54.382139  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:57:54.382148  169515 fix.go:54] fixHost starting: 
	I1004 01:57:54.382415  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:57:54.382441  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:57:54.397922  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1004 01:57:54.398391  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:57:54.398857  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:57:54.398879  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:57:54.399227  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:57:54.399426  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:57:54.399606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:57:54.401353  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Stopped err=<nil>
	I1004 01:57:54.401379  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	W1004 01:57:54.401556  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:57:54.403451  169515 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-239802" ...
	I1004 01:57:54.404883  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Start
	I1004 01:57:54.405065  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring networks are active...
	I1004 01:57:54.405797  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network default is active
	I1004 01:57:54.406184  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network mk-default-k8s-diff-port-239802 is active
	I1004 01:57:54.406630  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Getting domain xml...
	I1004 01:57:54.407374  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Creating domain...
	I1004 01:57:55.768364  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting to get IP...
	I1004 01:57:55.769252  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769744  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769819  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.769720  170429 retry.go:31] will retry after 205.391459ms: waiting for machine to come up
	I1004 01:57:55.977260  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977696  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977721  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.977651  170429 retry.go:31] will retry after 308.679034ms: waiting for machine to come up
	I1004 01:57:56.288223  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288707  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288740  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.288656  170429 retry.go:31] will retry after 419.166959ms: waiting for machine to come up
	I1004 01:57:56.708911  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709549  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709581  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.709483  170429 retry.go:31] will retry after 402.015435ms: waiting for machine to come up
	I1004 01:57:57.113100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113682  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113735  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.113608  170429 retry.go:31] will retry after 555.795777ms: waiting for machine to come up
	I1004 01:57:57.671427  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672087  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672124  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.671985  170429 retry.go:31] will retry after 891.745334ms: waiting for machine to come up
	I1004 01:57:58.564986  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565501  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565533  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:58.565436  170429 retry.go:31] will retry after 897.272137ms: waiting for machine to come up
	I1004 01:57:59.465110  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465742  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465773  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:59.465695  170429 retry.go:31] will retry after 1.042370898s: waiting for machine to come up
	I1004 01:58:00.509812  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510320  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:00.510296  170429 retry.go:31] will retry after 1.512718285s: waiting for machine to come up
	I1004 01:58:02.024160  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024566  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:02.024502  170429 retry.go:31] will retry after 1.493800744s: waiting for machine to come up
	I1004 01:58:03.520361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520958  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520991  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:03.520911  170429 retry.go:31] will retry after 2.206730553s: waiting for machine to come up
	I1004 01:58:05.729534  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730016  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730050  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:05.729969  170429 retry.go:31] will retry after 3.088350315s: waiting for machine to come up
	I1004 01:58:08.820266  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820743  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820774  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:08.820689  170429 retry.go:31] will retry after 2.773482095s: waiting for machine to come up
	I1004 01:58:11.595977  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596515  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596540  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:11.596475  170429 retry.go:31] will retry after 3.486376696s: waiting for machine to come up
	I1004 01:58:15.084904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.085418  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Found IP for machine: 192.168.61.105
	I1004 01:58:15.085447  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserving static IP address...
	I1004 01:58:15.085460  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has current primary IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.086007  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.086039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserved static IP address: 192.168.61.105
	I1004 01:58:15.086059  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | skip adding static IP to network mk-default-k8s-diff-port-239802 - found existing host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"}
	I1004 01:58:15.086080  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Getting to WaitForSSH function...
	I1004 01:58:15.086098  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for SSH to be available...
	I1004 01:58:15.088134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088506  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.088538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088726  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH client type: external
	I1004 01:58:15.088751  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa (-rw-------)
	I1004 01:58:15.088802  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:58:15.088817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | About to run SSH command:
	I1004 01:58:15.088829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | exit 0
	I1004 01:58:15.226051  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | SSH cmd err, output: <nil>: 
	I1004 01:58:15.226408  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetConfigRaw
	I1004 01:58:15.227055  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.229669  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.230108  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230390  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:58:15.230651  169515 machine.go:88] provisioning docker machine ...
	I1004 01:58:15.230676  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:15.230912  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231113  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:58:15.231134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231297  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.233606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.233990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.234026  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.234134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.234317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234484  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234663  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.234867  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.235199  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.235213  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:58:15.374541  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-239802
	
	I1004 01:58:15.374573  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.377761  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378278  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.378321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378494  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.378705  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378967  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.379135  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.379569  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.379594  169515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-239802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-239802/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-239802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:58:15.520076  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:58:15.520107  169515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:58:15.520129  169515 buildroot.go:174] setting up certificates
	I1004 01:58:15.520141  169515 provision.go:83] configureAuth start
	I1004 01:58:15.520155  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.520502  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.523317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.523814  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.523854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.524058  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.526453  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526752  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.526794  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526920  169515 provision.go:138] copyHostCerts
	I1004 01:58:15.526985  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:58:15.527069  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:58:15.527197  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:58:15.527323  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:58:15.527337  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:58:15.527373  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:58:15.527450  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:58:15.527460  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:58:15.527490  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:58:15.527550  169515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-239802 san=[192.168.61.105 192.168.61.105 localhost 127.0.0.1 minikube default-k8s-diff-port-239802]
	I1004 01:58:15.632152  169515 provision.go:172] copyRemoteCerts
	I1004 01:58:15.632211  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:58:15.632236  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.635344  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.635733  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635886  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.636100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.636262  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.636411  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:15.731442  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1004 01:58:15.755690  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:58:15.781135  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:58:15.805779  169515 provision.go:86] duration metric: configureAuth took 285.621049ms
	I1004 01:58:15.805813  169515 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:58:15.806097  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:58:15.806193  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.809186  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.809648  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.810105  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810354  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810577  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.810822  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.811265  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.811283  169515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:58:16.145471  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:58:16.145515  169515 machine.go:91] provisioned docker machine in 914.847777ms
	I1004 01:58:16.145528  169515 start.go:300] post-start starting for "default-k8s-diff-port-239802" (driver="kvm2")
	I1004 01:58:16.145541  169515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:58:16.145564  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.145936  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:58:16.145970  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.148759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149272  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.149306  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149563  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.149803  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.150023  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.150185  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.245579  169515 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:58:16.250364  169515 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:58:16.250394  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:58:16.250472  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:58:16.250566  169515 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:58:16.250821  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:58:16.260991  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:16.283999  169515 start.go:303] post-start completed in 138.45373ms
	I1004 01:58:16.284022  169515 fix.go:56] fixHost completed within 21.901874601s
	I1004 01:58:16.284043  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.286817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287150  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.287174  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287383  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.287598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287848  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.288010  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:16.288381  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:16.288414  169515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:58:16.418775  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696384696.400645117
	
	I1004 01:58:16.418799  169515 fix.go:206] guest clock: 1696384696.400645117
	I1004 01:58:16.418806  169515 fix.go:219] Guest: 2023-10-04 01:58:16.400645117 +0000 UTC Remote: 2023-10-04 01:58:16.284026062 +0000 UTC m=+304.486597710 (delta=116.619055ms)
	I1004 01:58:16.418832  169515 fix.go:190] guest clock delta is within tolerance: 116.619055ms
	I1004 01:58:16.418837  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 22.036713239s
	I1004 01:58:16.418861  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.419152  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:16.421829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422225  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.422265  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.422990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423191  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423288  169515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:58:16.423361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.423400  169515 ssh_runner.go:195] Run: cat /version.json
	I1004 01:58:16.423430  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.426244  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426412  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426666  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426835  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.426903  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426928  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.427049  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427079  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.427257  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427305  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427389  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.427491  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427616  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.541652  169515 ssh_runner.go:195] Run: systemctl --version
	I1004 01:58:16.548207  169515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:58:16.689236  169515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:58:16.695609  169515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:58:16.695700  169515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:58:16.711541  169515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
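
The find/-exec mv above sidelines any pre-existing bridge or podman CNI configs (here 87-podman-bridge.conflist) so that the bridge config written later is the only one CRI-O picks up. A rough local equivalent of that rename step; the .mk_disabled suffix is taken from the log, while doing it with filepath.Glob instead of find over SSH is purely for illustration:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableConflictingCNI renames *bridge* and *podman* configs in /etc/cni/net.d
    // by appending .mk_disabled, mirroring the `find ... -exec mv` in the log.
    func disableConflictingCNI(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already sidelined
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        files, err := disableConflictingCNI("/etc/cni/net.d")
        fmt.Println(files, err)
    }
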
	I1004 01:58:16.711569  169515 start.go:469] detecting cgroup driver to use...
	I1004 01:58:16.711648  169515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:58:16.727693  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:58:16.741081  169515 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:58:16.741145  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:58:16.754740  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:58:16.768697  169515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:58:16.892808  169515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:58:17.012129  169515 docker.go:213] disabling docker service ...
	I1004 01:58:17.012203  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:58:17.027872  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:58:17.039804  169515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:58:17.138577  169515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:58:17.242819  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:58:17.255768  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:58:17.273761  169515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:58:17.273824  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.284028  169515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:58:17.284103  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.294763  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.304668  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.314305  169515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:58:17.324280  169515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:58:17.333123  169515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:58:17.333181  169515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:58:17.346921  169515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:58:17.357411  169515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:58:17.466076  169515 ssh_runner.go:195] Run: sudo systemctl restart crio
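
The sed calls above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager, load br_netfilter, enable IP forwarding, and then restart the daemon. A sketch of the same key = value rewrite done in Go rather than sed; the drop-in path is the one from the log, while setConfValue is an illustrative helper, not minikube code:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfValue rewrites a `key = ...` line in a CRI-O drop-in, the same effect as
    // the `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        if !re.Match(data) {
            return fmt.Errorf("%s not found in %s", key, path)
        }
        out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        for k, v := range map[string]string{
            "pause_image":    "registry.k8s.io/pause:3.9",
            "cgroup_manager": "cgroupfs",
        } {
            if err := setConfValue(conf, k, v); err != nil {
                fmt.Println(err)
            }
        }
        // after editing: systemctl daemon-reload && systemctl restart crio, as in the log
    }
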
	I1004 01:58:17.665370  169515 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:58:17.665446  169515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:58:17.671020  169515 start.go:537] Will wait 60s for crictl version
	I1004 01:58:17.671103  169515 ssh_runner.go:195] Run: which crictl
	I1004 01:58:17.675046  169515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:58:17.711171  169515 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:58:17.711255  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.764684  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.818887  169515 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:58:17.820580  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:17.823598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824003  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:17.824039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824180  169515 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 01:58:17.828529  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
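
The bash one-liner above strips any stale host.minikube.internal line from /etc/hosts and appends the gateway IP for it. A small Go rendering of the same grep-and-append rewrite; the log writes through a temp file and sudo cp, which is simplified here to an in-place write:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any old line ending in "\thost" and appends "ip\thost",
    // the same effect as the `{ grep -v ...; echo ...; } > /tmp/h.$$` one-liner.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // old entry, re-added below
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"))
    }
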
	I1004 01:58:17.842201  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:58:17.842277  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:17.889167  169515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:58:17.889260  169515 ssh_runner.go:195] Run: which lz4
	I1004 01:58:17.893479  169515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 01:58:17.898162  169515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:58:17.898208  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:58:19.729377  169515 crio.go:444] Took 1.835934 seconds to copy over tarball
	I1004 01:58:19.729456  169515 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:58:22.593494  169515 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.864005818s)
	I1004 01:58:22.593526  169515 crio.go:451] Took 2.864115 seconds to extract the tarball
	I1004 01:58:22.593541  169515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:58:22.637806  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:22.688382  169515 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:58:22.688411  169515 cache_images.go:84] Images are preloaded, skipping loading
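
The preload step above is check-then-copy: if `crictl images` does not already list the expected kube-apiserver tag, the ~457 MB preloaded-images tarball is scp'd to /preloaded.tar.lz4 and unpacked with `tar -I lz4 -C /var`, after which the images check passes and loading is skipped. A sketch of just the "are the images already there" decision, assuming crictl's JSON output shape of {"images":[{"repoTags":[...]}]}; the struct below is a hand-written subset for the sketch, not an official client type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage returns true if `crictl images --output json` already lists the wanted
    // tag, in which case copying and extracting the preload tarball can be skipped.
    func hasImage(want string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                if tag == want {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.2")
        fmt.Println(ok, err)
    }
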
	I1004 01:58:22.688492  169515 ssh_runner.go:195] Run: crio config
	I1004 01:58:22.763035  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:22.763056  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:22.763523  169515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:58:22.763558  169515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-239802 NodeName:default-k8s-diff-port-239802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:58:22.763710  169515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-239802"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:58:22.763781  169515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-239802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
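
The InitConfiguration/ClusterConfiguration/KubeletConfiguration block above and the kubelet ExecStart drop-in are both rendered from the kubeadm options logged at kubeadm.go:176. A toy rendering of just the InitConfiguration piece with text/template; the struct fields and template below are invented for the sketch and are not minikube's real template data:

    package main

    import (
        "os"
        "text/template"
    )

    // Minimal stand-in for the values that feed the generated kubeadm.yaml.
    type initCfg struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        CRISocket        string
        NodeIP           string
    }

    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
      taints: []
    `

    func main() {
        cfg := initCfg{
            AdvertiseAddress: "192.168.61.105",
            BindPort:         8444,
            NodeName:         "default-k8s-diff-port-239802",
            CRISocket:        "unix:///var/run/crio/crio.sock",
            NodeIP:           "192.168.61.105",
        }
        // template.Must panics on a malformed template, acceptable for a fixed literal.
        t := template.Must(template.New("init").Parse(initTmpl))
        _ = t.Execute(os.Stdout, cfg)
    }
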
	I1004 01:58:22.763836  169515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:58:22.772839  169515 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:58:22.772912  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:58:22.781165  169515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1004 01:58:22.799884  169515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:58:22.817806  169515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1004 01:58:22.836379  169515 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1004 01:58:22.840577  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:22.854009  169515 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802 for IP: 192.168.61.105
	I1004 01:58:22.854051  169515 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:58:22.854225  169515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:58:22.854280  169515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:58:22.854390  169515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/client.key
	I1004 01:58:22.854470  169515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key.c44c9625
	I1004 01:58:22.854525  169515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key
	I1004 01:58:22.854676  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:58:22.854716  169515 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:58:22.854731  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:58:22.854795  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:58:22.854841  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:58:22.854874  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:58:22.854936  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:22.855704  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:58:22.883055  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:58:22.909260  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:58:22.936140  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:58:22.963068  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:58:22.990358  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:58:23.019293  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:58:23.046021  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:58:23.072727  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:58:23.099530  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:58:23.125965  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:58:23.152909  169515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:58:23.171043  169515 ssh_runner.go:195] Run: openssl version
	I1004 01:58:23.177062  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:58:23.187693  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192607  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192695  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.198687  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:58:23.208870  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:58:23.220345  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225134  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225205  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.230830  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:58:23.241519  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:58:23.251661  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256671  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256740  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.263041  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:58:23.272914  169515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:58:23.277650  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:58:23.283889  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:58:23.289960  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:58:23.295853  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:58:23.302386  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:58:23.308626  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
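
Each `openssl x509 -checkend 86400` call above asks whether the existing control-plane certificate is still valid for at least another 24 hours before it is reused. The same check in Go with crypto/x509; certIsFreshFor is a sketch helper, not a minikube function:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // certIsFreshFor reports whether the PEM cert at path stays valid for at least d,
    // the Go equivalent of `openssl x509 -noout -in path -checkend <seconds>`.
    func certIsFreshFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certIsFreshFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
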
	I1004 01:58:23.315173  169515 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:58:23.315270  169515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:58:23.315329  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:23.360078  169515 cri.go:89] found id: ""
	I1004 01:58:23.360160  169515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:58:23.370577  169515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:58:23.370607  169515 kubeadm.go:636] restartCluster start
	I1004 01:58:23.370670  169515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:58:23.380554  169515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.382064  169515 kubeconfig.go:92] found "default-k8s-diff-port-239802" server: "https://192.168.61.105:8444"
	I1004 01:58:23.384489  169515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:58:23.394552  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.394621  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.406027  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.406050  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.406088  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.416731  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.917459  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.917567  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.929055  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.417118  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.417196  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.429944  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.917530  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.917640  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.928908  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.417526  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.417598  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.429815  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.917482  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.917579  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.928966  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.417583  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.417703  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.429371  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.917165  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.917259  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.929210  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.417701  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.417803  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.429305  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.917024  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.928702  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.417024  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.417142  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.428772  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.917340  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.917439  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.929099  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.417234  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.417333  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.429431  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.916874  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.916967  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.928613  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.417157  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.417247  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.429364  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.917013  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.928682  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.417225  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.417328  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.429087  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.917131  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.917218  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.929475  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.416979  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.417061  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.431474  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.917018  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.917123  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.929083  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:33.394900  169515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
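
The burst of failed `pgrep -xnf kube-apiserver.*minikube.*` probes above is the restart path checking roughly every 500ms for a running apiserver until its deadline passes, at which point kubeadm.go:611 concludes the cluster needs reconfiguring. A bare-bones version of that poll loop; the 10-second budget and helper name are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID polls pgrep until it returns a PID or the deadline passes,
    // mirroring the repeated "Checking apiserver status ..." probes in the log.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return "", fmt.Errorf("no kube-apiserver process within %s", timeout)
    }

    func main() {
        pid, err := waitForAPIServerPID(10 * time.Second)
        fmt.Println(pid, err)
    }
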
	I1004 01:58:33.394937  169515 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:58:33.394955  169515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:58:33.395025  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:33.439584  169515 cri.go:89] found id: ""
	I1004 01:58:33.439676  169515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:58:33.455188  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:58:33.464838  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:58:33.464909  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473594  169515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473622  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:33.606598  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.496399  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.698397  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.778632  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
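
Rather than a full `kubeadm init`, the restart path replays the individual phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated kubeadm.yaml. A sketch of driving those phases in order with os/exec; the binary and config paths are the ones from the log, and error handling plus the PATH environment juggling are simplified away:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.28.2/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        // Same phase order as the log; each phase is rerun against the new config on restart.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput()
            if err != nil {
                fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
                return
            }
        }
        fmt.Println("all init phases completed")
    }
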
	I1004 01:58:34.858383  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:58:34.858475  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:34.871386  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.384197  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.884575  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.383599  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.883552  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.384513  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.409737  169515 api_server.go:72] duration metric: took 2.551352833s to wait for apiserver process to appear ...
	I1004 01:58:37.409768  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:58:37.409791  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410400  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.410464  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410871  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.911616  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.733688  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.733788  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.733802  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.789718  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.789758  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.911398  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.919484  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:41.919510  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.411543  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.417441  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.417474  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.910983  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.918972  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.918999  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:43.411752  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:43.418030  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 01:58:43.429647  169515 api_server.go:141] control plane version: v1.28.2
	I1004 01:58:43.429678  169515 api_server.go:131] duration metric: took 6.019900977s to wait for apiserver health ...
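
The /healthz sequence above is the usual startup shape: connection refused while the apiserver binds, 403 for the anonymous user before RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200. A minimal poller that only cares about reaching 200; TLS verification is skipped here because, like the log's probe, it is not authenticating:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 or the
    // timeout elapses; 403 and 500 responses are treated as "not ready yet".
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.61.105:8444/healthz", time.Minute))
    }
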
	I1004 01:58:43.429690  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:43.429697  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:43.431972  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:58:43.433484  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:58:43.447694  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:58:43.471374  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:58:43.481660  169515 system_pods.go:59] 8 kube-system pods found
	I1004 01:58:43.481703  169515 system_pods.go:61] "coredns-5dd5756b68-ntmdn" [93a30dd9-0d38-4648-9291-703928437ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:58:43.481716  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [387a9b5c-12b7-4be8-ab2a-a05f15640f17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 01:58:43.481725  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [a9900212-1372-410f-b6d9-105f78dfde92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 01:58:43.481735  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [d9684911-65f2-4b81-800a-9d99b277b7e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:58:43.481747  169515 system_pods.go:61] "kube-proxy-v9qw4" [6db82ea2-130c-4f40-ae3e-2abe4fdb2860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 01:58:43.481757  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [98b82b29-64c3-4042-bf6b-040b05992648] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:58:43.481770  169515 system_pods.go:61] "metrics-server-57f55c9bc5-hxrqk" [94e85ebf-dba5-4975-8167-bc23dc74b5f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:58:43.481789  169515 system_pods.go:61] "storage-provisioner" [11d1866b-ef0b-4b12-a2d3-a38fe68f5184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 01:58:43.481801  169515 system_pods.go:74] duration metric: took 10.402243ms to wait for pod list to return data ...
	I1004 01:58:43.481815  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:58:43.485997  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:58:43.486041  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 01:58:43.486056  169515 node_conditions.go:105] duration metric: took 4.234155ms to run NodePressure ...
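
The NodePressure step reads each node's capacity (ephemeral storage and CPU, the two values printed above) before moving on to the addon phase. With client-go that reduces to listing nodes and reading Status.Capacity; a sketch assuming a kubeconfig at a placeholder path rather than this job's actual one:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The log prints the same two values: ephemeral storage capacity and CPU count.
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
                n.Status.Capacity.StorageEphemeral().String(),
                n.Status.Capacity.Cpu().String())
        }
    }
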
	I1004 01:58:43.486078  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:43.740784  169515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749933  169515 kubeadm.go:787] kubelet initialised
	I1004 01:58:43.749956  169515 kubeadm.go:788] duration metric: took 9.146841ms waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749964  169515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:58:43.762449  169515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:45.795545  169515 pod_ready.go:102] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:47.294570  169515 pod_ready.go:92] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:47.294593  169515 pod_ready.go:81] duration metric: took 3.532106169s waiting for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:47.294629  169515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:49.318426  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.320090  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.819783  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.819808  169515 pod_ready.go:81] duration metric: took 4.525169791s waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.819820  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825714  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.825738  169515 pod_ready.go:81] duration metric: took 5.910346ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825750  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345345  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.345375  169515 pod_ready.go:81] duration metric: took 519.614193ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345388  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351098  169515 pod_ready.go:92] pod "kube-proxy-v9qw4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.351115  169515 pod_ready.go:81] duration metric: took 5.721421ms waiting for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351123  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675957  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.675986  169515 pod_ready.go:81] duration metric: took 324.855954ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675999  169515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:54.985434  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:56.986014  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:59.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:01.984178  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:03.986718  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:06.486121  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:08.986286  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:10.988493  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:13.487313  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:15.986463  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:17.987092  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:20.484986  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:22.985012  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:25.486297  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:27.988254  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:30.486124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:32.486163  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:34.986124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:36.986217  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:39.485494  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:41.485638  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:43.987966  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:46.484556  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:48.984057  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:50.984900  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:53.483808  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:55.484765  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:57.485763  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:59.985726  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:02.484831  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:04.985989  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:07.485664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:09.485893  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:11.985932  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:13.986799  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:16.488334  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:18.985949  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:21.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:23.986108  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:26.486381  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:28.984912  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:31.484885  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:33.485511  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:35.485786  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:37.985061  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:40.486400  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:42.985255  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:45.485905  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:47.985646  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:49.988812  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:52.485077  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:54.485567  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:56.486128  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:58.486811  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:00.985292  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:02.985432  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:04.990218  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:07.485695  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:09.485758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:11.985237  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:13.988632  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:16.486921  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:18.986300  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:21.486008  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:23.990988  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:26.486730  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:28.984846  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:30.985403  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:32.985500  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:34.989615  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:37.485216  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:39.985745  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:42.485969  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:44.984000  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:46.984954  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:49.485168  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:51.986705  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:53.987005  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:56.484664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:58.485697  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:00.486876  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:02.986832  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:05.485817  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:07.486977  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:09.984945  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:11.985637  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:13.985859  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:16.484825  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:18.485020  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:20.485388  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:22.486622  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:24.985561  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:27.484794  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:29.986684  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:32.494495  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:34.984951  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:36.985082  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:38.987881  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:41.485453  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:43.486758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:45.983941  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:47.984452  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:50.486243  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:52.676831  169515 pod_ready.go:81] duration metric: took 4m0.000812817s waiting for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	E1004 02:02:52.676871  169515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 02:02:52.676911  169515 pod_ready.go:38] duration metric: took 4m8.926937921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:02:52.676950  169515 kubeadm.go:640] restartCluster took 4m29.306332407s
	W1004 02:02:52.677028  169515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 02:02:52.677066  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 02:03:06.687598  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.010492171s)
	I1004 02:03:06.687683  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:06.702277  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:03:06.711887  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:03:06.721545  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:03:06.721606  169515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:03:06.964165  169515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:03:17.591049  169515 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:03:17.591142  169515 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:03:17.591233  169515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:03:17.591398  169515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:03:17.591561  169515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:03:17.591679  169515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:03:17.593418  169515 out.go:204]   - Generating certificates and keys ...
	I1004 02:03:17.593514  169515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:03:17.593593  169515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:03:17.593716  169515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 02:03:17.593817  169515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 02:03:17.593913  169515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 02:03:17.593964  169515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 02:03:17.594015  169515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 02:03:17.594064  169515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 02:03:17.594137  169515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 02:03:17.594216  169515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 02:03:17.594254  169515 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 02:03:17.594318  169515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:03:17.594374  169515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:03:17.594446  169515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:03:17.594525  169515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:03:17.594596  169515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:03:17.594701  169515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:03:17.594785  169515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:03:17.596492  169515 out.go:204]   - Booting up control plane ...
	I1004 02:03:17.596593  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:03:17.596678  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:03:17.596767  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:03:17.596903  169515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:03:17.597026  169515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:03:17.597087  169515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:03:17.597271  169515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:03:17.597365  169515 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004292 seconds
	I1004 02:03:17.597507  169515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:03:17.597663  169515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:03:17.597752  169515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:03:17.598019  169515 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-239802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:03:17.598091  169515 kubeadm.go:322] [bootstrap-token] Using token: 23w16s.bx0je8b3n2xujqpx
	I1004 02:03:17.599777  169515 out.go:204]   - Configuring RBAC rules ...
	I1004 02:03:17.599892  169515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:03:17.600022  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:03:17.600211  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:03:17.600376  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:03:17.600517  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:03:17.600640  169515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:03:17.600774  169515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:03:17.600836  169515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:03:17.600895  169515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:03:17.600908  169515 kubeadm.go:322] 
	I1004 02:03:17.600957  169515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:03:17.600963  169515 kubeadm.go:322] 
	I1004 02:03:17.601026  169515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:03:17.601032  169515 kubeadm.go:322] 
	I1004 02:03:17.601053  169515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:03:17.601102  169515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:03:17.601157  169515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:03:17.601164  169515 kubeadm.go:322] 
	I1004 02:03:17.601213  169515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:03:17.601226  169515 kubeadm.go:322] 
	I1004 02:03:17.601282  169515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:03:17.601289  169515 kubeadm.go:322] 
	I1004 02:03:17.601369  169515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:03:17.601470  169515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:03:17.601584  169515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:03:17.601594  169515 kubeadm.go:322] 
	I1004 02:03:17.601698  169515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:03:17.601780  169515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:03:17.601791  169515 kubeadm.go:322] 
	I1004 02:03:17.601919  169515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602052  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:03:17.602084  169515 kubeadm.go:322] 	--control-plane 
	I1004 02:03:17.602094  169515 kubeadm.go:322] 
	I1004 02:03:17.602212  169515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:03:17.602221  169515 kubeadm.go:322] 
	I1004 02:03:17.602358  169515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602512  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:03:17.602532  169515 cni.go:84] Creating CNI manager for ""
	I1004 02:03:17.602543  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:03:17.605029  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:03:17.606395  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:03:17.633626  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 02:03:17.708983  169515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:03:17.709074  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.709079  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=default-k8s-diff-port-239802 minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.817989  169515 ops.go:34] apiserver oom_adj: -16
	I1004 02:03:18.073171  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.187308  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.820889  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.320388  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.820323  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.320333  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.821163  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.320330  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.821019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.321019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.821177  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.321168  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.820299  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.320582  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.820863  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.320469  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.820489  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.321120  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.820999  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.321119  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.820996  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.320295  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.821014  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.320832  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.820960  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.321064  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.472351  169515 kubeadm.go:1081] duration metric: took 12.76333985s to wait for elevateKubeSystemPrivileges.
	I1004 02:03:30.472398  169515 kubeadm.go:406] StartCluster complete in 5m7.157236676s
	I1004 02:03:30.472421  169515 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.472516  169515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:03:30.474474  169515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.474744  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:03:30.474777  169515 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:03:30.474868  169515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474889  169515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474894  169515 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474903  169515 addons.go:240] addon storage-provisioner should already be in state true
	I1004 02:03:30.474906  169515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474929  169515 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474938  169515 addons.go:240] addon metrics-server should already be in state true
	I1004 02:03:30.474973  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474985  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474911  169515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-239802"
	I1004 02:03:30.474998  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475437  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475468  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475439  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475657  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.493623  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I1004 02:03:30.493662  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I1004 02:03:30.493781  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1004 02:03:30.494163  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494166  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494444  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494788  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494790  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494812  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.494815  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495193  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.495213  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.495555  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495810  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.495842  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.496520  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.496559  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.499305  169515 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.499322  169515 addons.go:240] addon default-storageclass should already be in state true
	I1004 02:03:30.499345  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.499914  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.499942  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.514137  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I1004 02:03:30.514752  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.515464  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.515494  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.515576  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1004 02:03:30.515848  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.515990  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.516030  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.516461  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.516481  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.516840  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.517034  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.518156  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.518191  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I1004 02:03:30.521584  169515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 02:03:30.518793  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.518847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.522961  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:03:30.522981  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:03:30.523002  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.524589  169515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:03:30.523376  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.524627  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.525081  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.525873  169515 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.525888  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:03:30.525904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.526430  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.526461  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.526677  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.530913  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.531170  169515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-239802" context rescaled to 1 replicas
	I1004 02:03:30.531206  169515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:03:30.532986  169515 out.go:177] * Verifying Kubernetes components...
	I1004 02:03:30.531340  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.531757  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.533318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.533937  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.535094  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:30.535197  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.535227  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.535231  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535394  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535440  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.535914  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.535943  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.536116  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.549570  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I1004 02:03:30.550039  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.550714  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.550744  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.551157  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.551347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.553113  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.553403  169515 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.553418  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:03:30.553433  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.555904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556293  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.556318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.556748  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.556908  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.557059  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.745640  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.772975  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:03:30.772997  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 02:03:30.828675  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.862436  169515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.862505  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:03:30.867582  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:03:30.867606  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:03:30.869762  169515 node_ready.go:49] node "default-k8s-diff-port-239802" has status "Ready":"True"
	I1004 02:03:30.869782  169515 node_ready.go:38] duration metric: took 7.313127ms waiting for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.869791  169515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:30.878259  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:30.953707  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:30.953739  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:03:31.080848  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:31.923980  169515 pod_ready.go:97] error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924020  169515 pod_ready.go:81] duration metric: took 1.045735768s waiting for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	E1004 02:03:31.924034  169515 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924041  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.089720  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.344027143s)
	I1004 02:03:33.089798  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.227266643s)
	I1004 02:03:33.089820  169515 start.go:923] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1004 02:03:33.089826  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089749  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261039922s)
	I1004 02:03:33.089847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.089856  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089872  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090197  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090217  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090228  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090226  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090240  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090292  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090310  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090322  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090333  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090332  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090486  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090501  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090993  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.091015  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.120294  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.120321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.120639  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.120660  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379169  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.298272317s)
	I1004 02:03:33.379231  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379247  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379568  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379585  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379595  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379608  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379884  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.379928  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379952  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379965  169515 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-239802"
	I1004 02:03:33.382638  169515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 02:03:33.384185  169515 addons.go:502] enable addons completed in 2.909411548s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 02:03:33.970600  169515 pod_ready.go:92] pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.970634  169515 pod_ready.go:81] duration metric: took 2.046583312s waiting for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.970649  169515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976833  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.976858  169515 pod_ready.go:81] duration metric: took 6.200437ms waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976870  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.983984  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.984006  169515 pod_ready.go:81] duration metric: took 7.126822ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.984016  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269435  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.269462  169515 pod_ready.go:81] duration metric: took 285.437635ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269476  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667111  169515 pod_ready.go:92] pod "kube-proxy-b5ltp" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.667138  169515 pod_ready.go:81] duration metric: took 397.655055ms waiting for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667147  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068656  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:35.068692  169515 pod_ready.go:81] duration metric: took 401.53728ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068706  169515 pod_ready.go:38] duration metric: took 4.198904278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:35.068731  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:03:35.068800  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:03:35.085104  169515 api_server.go:72] duration metric: took 4.553859804s to wait for apiserver process to appear ...
	I1004 02:03:35.085129  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:03:35.085148  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 02:03:35.093144  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 02:03:35.094563  169515 api_server.go:141] control plane version: v1.28.2
	I1004 02:03:35.094583  169515 api_server.go:131] duration metric: took 9.447369ms to wait for apiserver health ...
	I1004 02:03:35.094591  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:03:35.271828  169515 system_pods.go:59] 8 kube-system pods found
	I1004 02:03:35.271855  169515 system_pods.go:61] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.271862  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.271867  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.271871  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.271875  169515 system_pods.go:61] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.271879  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.271886  169515 system_pods.go:61] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.271891  169515 system_pods.go:61] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.271899  169515 system_pods.go:74] duration metric: took 177.302484ms to wait for pod list to return data ...
	I1004 02:03:35.271906  169515 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:03:35.466915  169515 default_sa.go:45] found service account: "default"
	I1004 02:03:35.466956  169515 default_sa.go:55] duration metric: took 195.042376ms for default service account to be created ...
	I1004 02:03:35.466968  169515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:03:35.669331  169515 system_pods.go:86] 8 kube-system pods found
	I1004 02:03:35.669358  169515 system_pods.go:89] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.669363  169515 system_pods.go:89] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.669368  169515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.669372  169515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.669376  169515 system_pods.go:89] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.669380  169515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.669386  169515 system_pods.go:89] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.669391  169515 system_pods.go:89] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.669397  169515 system_pods.go:126] duration metric: took 202.42259ms to wait for k8s-apps to be running ...
	I1004 02:03:35.669404  169515 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:03:35.669446  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:35.685440  169515 system_svc.go:56] duration metric: took 16.022733ms WaitForService to wait for kubelet.
	I1004 02:03:35.685475  169515 kubeadm.go:581] duration metric: took 5.154237901s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 02:03:35.685502  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:03:35.867523  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 02:03:35.867616  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 02:03:35.867645  169515 node_conditions.go:105] duration metric: took 182.13715ms to run NodePressure ...
	I1004 02:03:35.867672  169515 start.go:228] waiting for startup goroutines ...
	I1004 02:03:35.867711  169515 start.go:233] waiting for cluster config update ...
	I1004 02:03:35.867729  169515 start.go:242] writing updated cluster config ...
	I1004 02:03:35.868000  169515 ssh_runner.go:195] Run: rm -f paused
	I1004 02:03:35.921562  169515 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 02:03:35.924514  169515 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-239802" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:43 UTC, ends at Wed 2023-10-04 02:05:00 UTC. --
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.881193211Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dbb6fb42-f83d-42a9-810a-8950468a44b8 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.882390273Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c555649a-b288-489d-8fe5-2cf0fcde0fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.883451626Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385099883408189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=c555649a-b288-489d-8fe5-2cf0fcde0fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.884208508Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=de83b983-35b5-4d7a-9abd-674c5614909a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.884260411Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=de83b983-35b5-4d7a-9abd-674c5614909a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.884600356Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=de83b983-35b5-4d7a-9abd-674c5614909a name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.892549792Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82477c15-1b89-4731-a5cc-91a06b9e9af7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.892754408Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&PodSandboxMetadata{Name:busybox,Uid:16cc2d74-3565-4360-9899-bd029b8d2c9d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384298841558108,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:51:30.826613096Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-wkrdx,Uid:0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:16963842988372802
76,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:51:30.826625028Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:47ac012e674bd5b04b55f52a28578bd86d50b879591c72db87bdc7cd873f785f,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-mmm7c,Uid:b0660d47-8147-4844-aa22-e8c4b4f40577,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384294926880954,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-mmm7c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0660d47-8147-4844-aa22-e8c4b4f40577,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:51:30.8
26622439Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9ee57ba0-6b8f-48cc-afe0-e946ec97f879,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384291181757064,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-m
inikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-04T01:51:30.826623667Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&PodSandboxMetadata{Name:kube-proxy-shlvt,Uid:2a1c2fe3-4209-406d-8e28-74d5c3148c6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384291177377918,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-4209-406d-8e28-74d5c3148c6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io
/config.seen: 2023-10-04T01:51:30.826627440Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-273516,Uid:614882f03fc6563cd524e3b9c43687b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384284411824835,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd524e3b9c43687b6,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 614882f03fc6563cd524e3b9c43687b6,kubernetes.io/config.seen: 2023-10-04T01:51:23.811766497Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-273516,Uid:44d8f204be9b0d63cc7d39992bde49cd,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384284405359795,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 44d8f204be9b0d63cc7d39992bde49cd,kubernetes.io/config.seen: 2023-10-04T01:51:23.811765742Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-273516,Uid:77fa706a0ad84a510da5d8d1ad33a325,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384284370792226,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510
da5d8d1ad33a325,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.165:2379,kubernetes.io/config.hash: 77fa706a0ad84a510da5d8d1ad33a325,kubernetes.io/config.seen: 2023-10-04T01:51:23.811760984Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-273516,Uid:d5fddeadc0131ddc8d9e3f74c1e41162,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384284349613500,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.165:8443,kubernetes.io/config.hash: d5fddeadc0131ddc8d9e3f74c1e41162,ku
bernetes.io/config.seen: 2023-10-04T01:51:23.811764664Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=82477c15-1b89-4731-a5cc-91a06b9e9af7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.893502636Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c3603911-b34c-46f0-abd7-416df453cc29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.893611632Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c3603911-b34c-46f0-abd7-416df453cc29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.893798118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-4
209-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd
524e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.
kubernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.
container.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c3603911-b34c-46f0-abd7-416df453cc29 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.933733001Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=396c8134-8917-4465-b742-1441abff2533 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.933820434Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=396c8134-8917-4465-b742-1441abff2533 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.935079033Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4da3f6d-d6bf-4e79-9854-9695761f7af6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.935557436Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385099935543407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=d4da3f6d-d6bf-4e79-9854-9695761f7af6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.936377471Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=02887ba7-1826-4907-bc79-2a7c952ca214 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.936427008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=02887ba7-1826-4907-bc79-2a7c952ca214 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.936609025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=02887ba7-1826-4907-bc79-2a7c952ca214 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.975850236Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=21901615-7d19-400a-9ecf-b0b3a3ac6700 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.975937408Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=21901615-7d19-400a-9ecf-b0b3a3ac6700 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.976909904Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6488e001-5edf-488d-9658-eca0ff0a8377 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.977518098Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385099977499865,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6488e001-5edf-488d-9658-eca0ff0a8377 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.978188957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=90b3995c-4578-4a61-8597-d0da73fc3c18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.978266256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=90b3995c-4578-4a61-8597-d0da73fc3c18 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:04:59 no-preload-273516 crio[742]: time="2023-10-04 02:04:59.978461134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=90b3995c-4578-4a61-8597-d0da73fc3c18 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	05942e8201b6d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3aa2bdd0ded78       busybox
	e3d59ec2af4e1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   909c342bcc022       coredns-5dd5756b68-wkrdx
	2c2e9a0977a2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   644a85f7f3686       storage-provisioner
	3baef608a9876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   644a85f7f3686       storage-provisioner
	b413622f7c392       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      13 minutes ago      Running             kube-proxy                1                   f15ee68074374       kube-proxy-shlvt
	946ede03885c7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      13 minutes ago      Running             kube-scheduler            1                   b97bb39dd4ac4       kube-scheduler-no-preload-273516
	6e2ee480fbb80       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   2461a8f5daa2f       etcd-no-preload-273516
	9ebf01da00b61       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      13 minutes ago      Running             kube-apiserver            1                   d8bfac8f3f87c       kube-apiserver-no-preload-273516
	1406d9eca4647       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      13 minutes ago      Running             kube-controller-manager   1                   509ba2ebe4e93       kube-controller-manager-no-preload-273516
	
	* 
	* ==> coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58935 - 27051 "HINFO IN 18115897949314560.2540196831787147618. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022994197s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-273516
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-273516
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=no-preload-273516
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_41_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-273516
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:04:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:02:14 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:02:14 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:02:14 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:02:14 +0000   Wed, 04 Oct 2023 01:51:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.165
	  Hostname:    no-preload-273516
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 85b2abee83814eadafe0451f11a59a64
	  System UUID:                85b2abee-8381-4ead-afe0-451f11a59a64
	  Boot ID:                    cb041762-81b2-4e64-9de0-74cdaa7a20f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-5dd5756b68-wkrdx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-273516                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-273516             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-273516    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-shlvt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-273516             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-mmm7c              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23m (x7 over 23m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x7 over 23m)  kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node no-preload-273516 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-273516 event: Registered Node no-preload-273516 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-273516 event: Registered Node no-preload-273516 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.078638] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910100] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.713338] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160693] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.446991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.127295] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.119052] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.179735] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.149968] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +0.249311] systemd-fstab-generator[727]: Ignoring "noauto" for root device
	[Oct 4 01:51] systemd-fstab-generator[1253]: Ignoring "noauto" for root device
	[ +15.378079] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] <==
	* {"level":"info","ts":"2023-10-04T01:51:27.336435Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"3152c33aadbaa9f5","initial-advertise-peer-urls":["https://192.168.83.165:2380"],"listen-peer-urls":["https://192.168.83.165:2380"],"advertise-client-urls":["https://192.168.83.165:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.83.165:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-04T01:51:27.336491Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-04T01:51:27.336636Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.83.165:2380"}
	{"level":"info","ts":"2023-10-04T01:51:27.336645Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.165:2380"}
	{"level":"info","ts":"2023-10-04T01:51:29.090625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.0907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.090735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 received MsgPreVoteResp from 3152c33aadbaa9f5 at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.090748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 received MsgVoteResp from 3152c33aadbaa9f5 at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3152c33aadbaa9f5 elected leader 3152c33aadbaa9f5 at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.093602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:51:29.094604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:51:29.104397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:51:29.105601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.165:2379"}
	{"level":"info","ts":"2023-10-04T01:51:29.093546Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3152c33aadbaa9f5","local-member-attributes":"{Name:no-preload-273516 ClientURLs:[https://192.168.83.165:2379]}","request-path":"/0/members/3152c33aadbaa9f5/attributes","cluster-id":"7aac9845db42f04b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:51:29.112787Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:51:29.112838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-10-04T01:58:23.457422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.443909ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T01:58:23.457706Z","caller":"traceutil/trace.go:171","msg":"trace[1951961987] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:945; }","duration":"203.778645ms","start":"2023-10-04T01:58:23.253891Z","end":"2023-10-04T01:58:23.457669Z","steps":["trace[1951961987] 'range keys from in-memory index tree'  (duration: 203.337463ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:58:25.201468Z","caller":"traceutil/trace.go:171","msg":"trace[1359224245] transaction","detail":"{read_only:false; response_revision:946; number_of_response:1; }","duration":"360.658973ms","start":"2023-10-04T01:58:24.840782Z","end":"2023-10-04T01:58:25.201441Z","steps":["trace[1359224245] 'process raft request'  (duration: 360.424779ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:25.202806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:58:24.840765Z","time spent":"361.010971ms","remote":"127.0.0.1:38836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:945 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-04T02:01:29.12962Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":850}
	{"level":"info","ts":"2023-10-04T02:01:29.132936Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":850,"took":"2.94917ms","hash":435336003}
	{"level":"info","ts":"2023-10-04T02:01:29.133061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435336003,"revision":850,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  02:05:00 up 14 min,  0 users,  load average: 0.27, 0.23, 0.18
	Linux no-preload-273516 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] <==
	* I1004 02:01:30.753255       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:01:31.753418       1 handler_proxy.go:93] no RequestInfo found in the context
	W1004 02:01:31.753541       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:01:31.753712       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:01:31.753737       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1004 02:01:31.753640       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:01:31.755841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:02:30.611860       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:02:31.753905       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:02:31.753942       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:02:31.753961       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:02:31.755986       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:02:31.756210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:02:31.756268       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:03:30.612353       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 02:04:30.611980       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:04:31.754875       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:04:31.754933       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:04:31.754949       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:04:31.757474       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:04:31.757591       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:04:31.757603       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] <==
	* I1004 01:59:13.737001       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 01:59:43.251304       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 01:59:43.747566       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:00:13.258289       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:00:13.756858       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:00:43.264330       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:00:43.765313       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:01:13.270528       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:01:13.773822       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:01:43.281754       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:01:43.783943       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:02:13.286804       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:02:13.794710       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:02:21.913057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="326.364µs"
	I1004 02:02:34.910931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="150.146µs"
	E1004 02:02:43.291940       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:02:43.803678       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:03:13.300488       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:03:13.815263       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:03:43.306686       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:03:43.825504       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:04:13.312061       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:04:13.837858       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:04:43.319300       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:04:43.846597       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] <==
	* I1004 01:51:32.089640       1 server_others.go:69] "Using iptables proxy"
	I1004 01:51:32.099894       1 node.go:141] Successfully retrieved node IP: 192.168.83.165
	I1004 01:51:32.135904       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:51:32.135953       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:51:32.138824       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:51:32.138889       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:51:32.139253       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:51:32.139291       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:51:32.140395       1 config.go:188] "Starting service config controller"
	I1004 01:51:32.140445       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:51:32.140468       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:51:32.140472       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:51:32.143264       1 config.go:315] "Starting node config controller"
	I1004 01:51:32.143301       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:51:32.241259       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:51:32.241280       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:51:32.243833       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] <==
	* I1004 01:51:27.690663       1 serving.go:348] Generated self-signed cert in-memory
	I1004 01:51:30.801824       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 01:51:30.802015       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:51:30.824521       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1004 01:51:30.824631       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1004 01:51:30.825156       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:51:30.825217       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:51:30.825232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1004 01:51:30.825237       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1004 01:51:30.830399       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:51:30.834011       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:51:30.927685       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1004 01:51:30.927825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:51:30.927733       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:43 UTC, ends at Wed 2023-10-04 02:05:00 UTC. --
	Oct 04 02:02:21 no-preload-273516 kubelet[1259]: E1004 02:02:21.894994    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:02:24 no-preload-273516 kubelet[1259]: E1004 02:02:24.023319    1259 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:02:24 no-preload-273516 kubelet[1259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:02:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:02:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:02:34 no-preload-273516 kubelet[1259]: E1004 02:02:34.893188    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:02:47 no-preload-273516 kubelet[1259]: E1004 02:02:47.893909    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:03:00 no-preload-273516 kubelet[1259]: E1004 02:03:00.893684    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:03:11 no-preload-273516 kubelet[1259]: E1004 02:03:11.896079    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:03:22 no-preload-273516 kubelet[1259]: E1004 02:03:22.894257    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:03:24 no-preload-273516 kubelet[1259]: E1004 02:03:24.023306    1259 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:03:24 no-preload-273516 kubelet[1259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:03:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:03:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:03:34 no-preload-273516 kubelet[1259]: E1004 02:03:34.893971    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:03:48 no-preload-273516 kubelet[1259]: E1004 02:03:48.893063    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:04:02 no-preload-273516 kubelet[1259]: E1004 02:04:02.893815    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:04:16 no-preload-273516 kubelet[1259]: E1004 02:04:16.893832    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:04:24 no-preload-273516 kubelet[1259]: E1004 02:04:24.021998    1259 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:04:24 no-preload-273516 kubelet[1259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:04:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:04:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:04:31 no-preload-273516 kubelet[1259]: E1004 02:04:31.894702    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:04:46 no-preload-273516 kubelet[1259]: E1004 02:04:46.893809    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:04:58 no-preload-273516 kubelet[1259]: E1004 02:04:58.893382    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	
	* 
	* ==> storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] <==
	* I1004 01:51:33.111627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:51:33.120908       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:51:33.120980       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:51:50.526767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:51:50.526949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90e9a149-c8f5-4f3b-b586-6091789b0f8d", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa became leader
	I1004 01:51:50.527672       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa!
	I1004 01:51:50.630509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa!
	
	* 
	* ==> storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] <==
	* I1004 01:51:32.041028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 01:51:32.055412       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-273516 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mmm7c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c: exit status 1 (68.339944ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mmm7c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.59s)
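For manual triage of a failure like this one, the non-running-pod query that helpers_test.go issues above can be re-run directly against the profile; a minimal sketch, assuming the no-preload-273516 kubectl context is still present on the build agent (illustrative only, not part of the test suite):

	# Repeat the field-selector query used by the post-mortem helper above
	kubectl --context no-preload-273516 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# Any pod reported can then be described by name, as the helper does,
	# e.g.: kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c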

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 01:58:15.375803  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 02:00:33.291162  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 02:01:05.194763  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 02:02:28.244831  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 02:03:15.375632  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107182 -n old-k8s-version-107182
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:06:38.298298332 +0000 UTC m=+4990.669329378
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-107182 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-107182 logs -n 25: (1.384819477s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-107182        | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-528457                              | cert-expiration-528457       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-554732 | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | disable-driver-mounts-554732                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
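	Each multi-row entry in the command table above is a single minikube invocation whose arguments are wrapped across rows. As a reading aid only, the final entry (the 01:53 restart of default-k8s-diff-port-239802) collapses to roughly the following one-line command; this is reconstructed from the table cells and is not a verbatim copy of the test harness's actual invocation or binary path:

	minikube start -p default-k8s-diff-port-239802 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.2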
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:53:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:53:11.828274  169515 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:53:11.828536  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828547  169515 out.go:309] Setting ErrFile to fd 2...
	I1004 01:53:11.828552  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828768  169515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:53:11.829347  169515 out.go:303] Setting JSON to false
	I1004 01:53:11.830376  169515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9343,"bootTime":1696375049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:53:11.830441  169515 start.go:138] virtualization: kvm guest
	I1004 01:53:11.832711  169515 out.go:177] * [default-k8s-diff-port-239802] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:53:11.834324  169515 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:53:11.835643  169515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:53:11.834361  169515 notify.go:220] Checking for updates...
	I1004 01:53:11.838217  169515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:53:11.839555  169515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:53:11.840846  169515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:53:11.842161  169515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:53:07.280681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:09.778282  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.779681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.843761  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:53:11.844277  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.844360  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.860250  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I1004 01:53:11.860700  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.861256  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.861279  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.861643  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.861866  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.862175  169515 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:53:11.862447  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.862487  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.877262  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1004 01:53:11.877711  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.878333  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.878357  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.878806  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.879014  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.917299  169515 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:53:11.918706  169515 start.go:298] selected driver: kvm2
	I1004 01:53:11.918721  169515 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.918831  169515 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:53:11.919435  169515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.919506  169515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:53:11.934986  169515 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:53:11.935329  169515 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:53:11.935365  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:53:11.935379  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:53:11.935399  169515 start_flags.go:321] config:
	{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.935580  169515 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.937595  169515 out.go:177] * Starting control plane node default-k8s-diff-port-239802 in cluster default-k8s-diff-port-239802
	I1004 01:53:11.938856  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:53:11.938906  169515 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:53:11.938918  169515 cache.go:57] Caching tarball of preloaded images
	I1004 01:53:11.939005  169515 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:53:11.939019  169515 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:53:11.939123  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:53:11.939343  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:53:11.939424  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 58.221µs
	I1004 01:53:11.939444  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:53:11.939453  169515 fix.go:54] fixHost starting: 
	I1004 01:53:11.939742  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.939789  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.954196  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I1004 01:53:11.954631  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.955177  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.955207  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.955546  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.955732  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.955907  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:53:11.957727  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Running err=<nil>
	W1004 01:53:11.957752  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:53:11.959786  169515 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-239802" VM ...
	I1004 01:53:08.669530  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.168697  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:10.723754  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.223290  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.960962  169515 machine.go:88] provisioning docker machine ...
	I1004 01:53:11.960980  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.961165  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961309  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:53:11.961321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961451  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:53:11.964100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964548  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:49:35 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:53:11.964579  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964700  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:53:11.964891  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965213  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:53:11.965415  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:53:11.965918  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:53:11.965942  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:53:14.858205  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:13.780979  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:16.279971  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.170120  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.170376  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.724119  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:18.223219  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.930132  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:18.779188  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.668906  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:19.669782  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:22.169918  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.724642  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:23.225475  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.010157  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:23.279668  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.778425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.668233  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:26.669315  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.723231  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:28.222973  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:27.082190  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:27.778573  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.779483  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.168734  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:31.169219  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:30.223870  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:32.724030  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.162101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:36.234078  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:32.278768  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:34.279611  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:36.779455  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.669109  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.669923  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.224564  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.723997  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:39.724578  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:38.779567  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:41.278736  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.671432  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:40.168863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.168970  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.223844  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.224215  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.358317  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:43.278799  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.279544  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.169371  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.670033  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.726544  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.222631  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.426196  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:47.282389  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.779291  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.673161  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.170963  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.223796  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.724046  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.506087  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:52.280232  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.778941  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.668512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:55.668997  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:56.223812  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.223985  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:57.578187  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:57.281468  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:59.780369  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.169361  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.171086  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.723767  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.724182  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:03.658082  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:06.730171  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:02.278547  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:04.279504  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:06.779458  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.669174  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.169089  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.224336  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.724614  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:08.780155  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:11.281399  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:09.670536  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.170645  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:10.223678  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.724096  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.810084  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:15.882179  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:13.780199  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.280077  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:14.668216  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.668736  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:15.223755  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:17.223789  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:19.724040  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.780554  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.283185  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.672583  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.169626  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:22.223220  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:24.223653  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.962094  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:25.034104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:23.779529  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:25.785001  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:23.668523  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.170080  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.725426  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:29.224292  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.114102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:28.278824  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.280812  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:28.668973  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.669813  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.724077  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.223673  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.186185  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:32.283313  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.785440  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:33.169511  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:35.170079  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:36.223744  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:38.223824  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.270113  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:37.279625  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:39.779646  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:37.670022  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.170303  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.723833  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.723858  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.723974  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:43.338083  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:42.281698  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.778204  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.779425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.668686  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.671405  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:47.170837  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.418200  167452 pod_ready.go:81] duration metric: took 4m0.000746433s waiting for pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace to be "Ready" ...
	E1004 01:54:46.418242  167452 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:54:46.418266  167452 pod_ready.go:38] duration metric: took 4m6.792871015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:54:46.418310  167452 kubeadm.go:640] restartCluster took 4m30.137827083s
	W1004 01:54:46.418446  167452 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:54:46.418484  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:54:49.418125  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:48.780239  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.284905  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:49.174919  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.675479  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:52.490104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:53.778907  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:55.778958  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:54.169521  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:56.670982  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:58.570115  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:01.642220  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:57.779481  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.782476  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.170012  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:01.670386  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:00.372786  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.954218871s)
	I1004 01:55:00.372881  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:00.387256  167452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:55:00.396756  167452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:55:00.406765  167452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:55:00.406806  167452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 01:55:00.625971  167452 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:55:02.279852  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.281525  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.779641  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.170863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.671473  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:07.722109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:10.794061  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:08.780879  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.283040  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.183572  167452 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 01:55:12.183661  167452 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:55:12.183766  167452 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:55:12.183877  167452 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:55:12.183978  167452 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:55:12.184074  167452 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:55:12.185782  167452 out.go:204]   - Generating certificates and keys ...
	I1004 01:55:12.185896  167452 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:55:12.185952  167452 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:55:12.186040  167452 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:55:12.186118  167452 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:55:12.186210  167452 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:55:12.186309  167452 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:55:12.186400  167452 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:55:12.186483  167452 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:55:12.186608  167452 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:55:12.186728  167452 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:55:12.186790  167452 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:55:12.186869  167452 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:55:12.186944  167452 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:55:12.187022  167452 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:55:12.187094  167452 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:55:12.187174  167452 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:55:12.187302  167452 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:55:12.187369  167452 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:55:12.188941  167452 out.go:204]   - Booting up control plane ...
	I1004 01:55:12.189059  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:55:12.189132  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:55:12.189211  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:55:12.189324  167452 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:55:12.189452  167452 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:55:12.189504  167452 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 01:55:12.189735  167452 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:55:12.189877  167452 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004191 seconds
	I1004 01:55:12.190030  167452 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:55:12.190218  167452 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:55:12.190314  167452 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:55:12.190566  167452 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-509298 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:55:12.190670  167452 kubeadm.go:322] [bootstrap-token] Using token: i6ebw8.csx7j4uz10ltteg7
	I1004 01:55:12.192239  167452 out.go:204]   - Configuring RBAC rules ...
	I1004 01:55:12.192387  167452 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:55:12.192462  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:55:12.192608  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:55:12.192774  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:55:12.192904  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:55:12.192996  167452 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:55:12.193138  167452 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:55:12.193211  167452 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:55:12.193271  167452 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:55:12.193278  167452 kubeadm.go:322] 
	I1004 01:55:12.193325  167452 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:55:12.193332  167452 kubeadm.go:322] 
	I1004 01:55:12.193398  167452 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:55:12.193404  167452 kubeadm.go:322] 
	I1004 01:55:12.193424  167452 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:55:12.193475  167452 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:55:12.193517  167452 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:55:12.193523  167452 kubeadm.go:322] 
	I1004 01:55:12.193565  167452 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 01:55:12.193571  167452 kubeadm.go:322] 
	I1004 01:55:12.193628  167452 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:55:12.193638  167452 kubeadm.go:322] 
	I1004 01:55:12.193704  167452 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:55:12.193783  167452 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:55:12.193895  167452 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:55:12.193906  167452 kubeadm.go:322] 
	I1004 01:55:12.194003  167452 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:55:12.194073  167452 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:55:12.194080  167452 kubeadm.go:322] 
	I1004 01:55:12.194169  167452 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194254  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:55:12.194273  167452 kubeadm.go:322] 	--control-plane 
	I1004 01:55:12.194279  167452 kubeadm.go:322] 
	I1004 01:55:12.194352  167452 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:55:12.194360  167452 kubeadm.go:322] 
	I1004 01:55:12.194428  167452 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194540  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:55:12.194563  167452 cni.go:84] Creating CNI manager for ""
	I1004 01:55:12.194572  167452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:55:12.196296  167452 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:55:09.172018  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.670011  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.197574  167452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:55:12.219217  167452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:55:12.298578  167452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:55:12.298671  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.298685  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=embed-certs-509298 minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.379573  167452 ops.go:34] apiserver oom_adj: -16
	I1004 01:55:12.664606  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.821682  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.427770  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.928385  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.428534  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.927827  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.780253  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.286195  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:14.169232  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.669256  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:15.428102  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:15.928404  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.428316  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.928095  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.428581  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.928158  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.428061  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.927815  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.428285  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.927597  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.874102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:19.946137  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
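The repeated dial errors from process 169515 mean the harness cannot reach 192.168.61.105 on port 22 at all ("no route to host"), so SSH provisioning of that machine cannot proceed. A quick manual reachability check from the hypervisor host, using standard tools (a sketch, not something the harness runs):

    # Sketch: rule out basic connectivity problems before suspecting sshd itself.
    ping -c 3 192.168.61.105        # does the guest answer at all?
    nc -vz 192.168.61.105 22        # is anything listening on the SSH port?
    sudo virsh list --all           # libvirt's view: is the domain actually running?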
	I1004 01:55:18.779212  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.780120  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:18.671773  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:21.169373  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.428231  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:20.927662  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.427644  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.927803  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.427969  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.928321  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.428088  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.928382  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.427968  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.686625  167452 kubeadm.go:1081] duration metric: took 12.388021854s to wait for elevateKubeSystemPrivileges.
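The burst of `kubectl get sa default` calls above is minikube polling until the `default` ServiceAccount exists (its elevateKubeSystemPrivileges step), which took about 12.4s here. A rough equivalent of that wait written as a plain kubectl loop (a sketch; the 0.5s interval is an assumption):

    # Sketch: poll until the default ServiceAccount appears in the default namespace.
    until kubectl --context embed-certs-509298 get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account is present"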
	I1004 01:55:24.686650  167452 kubeadm.go:406] StartCluster complete in 5m8.467148399s
	I1004 01:55:24.686670  167452 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.686772  167452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:55:24.689005  167452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.691164  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:55:24.691505  167452 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:55:24.691524  167452 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:55:24.691609  167452 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-509298"
	I1004 01:55:24.691645  167452 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-509298"
	W1004 01:55:24.691666  167452 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:55:24.691681  167452 addons.go:69] Setting default-storageclass=true in profile "embed-certs-509298"
	I1004 01:55:24.691711  167452 addons.go:69] Setting metrics-server=true in profile "embed-certs-509298"
	I1004 01:55:24.691721  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.691750  167452 addons.go:231] Setting addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:24.691713  167452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-509298"
	W1004 01:55:24.691763  167452 addons.go:240] addon metrics-server should already be in state true
	I1004 01:55:24.692075  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692471  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692522  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692566  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692591  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.710712  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1004 01:55:24.711360  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.711863  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1004 01:55:24.712115  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712145  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.712236  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.712668  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.712925  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712950  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.713327  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.713364  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713391  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.713880  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713918  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.715208  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I1004 01:55:24.715594  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.716155  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.716185  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.716523  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.716732  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.720408  167452 addons.go:231] Setting addon default-storageclass=true in "embed-certs-509298"
	W1004 01:55:24.720590  167452 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:55:24.720630  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.720922  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.720963  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.731384  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1004 01:55:24.732142  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.732918  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.732946  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.733348  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.733666  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I1004 01:55:24.733699  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.734163  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.734711  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.734737  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.735163  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.735400  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.735991  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.738353  167452 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:55:24.740203  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:55:24.740222  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:55:24.737643  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.740244  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.742072  167452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:55:24.743597  167452 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:24.743626  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:55:24.743648  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.744536  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745006  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.745048  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745279  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.745519  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.745719  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.745878  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.748789  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.748842  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1004 01:55:24.749267  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.749298  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.749354  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.749818  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.749892  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.749978  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.750177  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.750270  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.750325  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.750752  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.750802  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.751018  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.768787  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I1004 01:55:24.769394  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.770412  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.770438  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.770803  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.770982  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.772831  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.773101  167452 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.773120  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:55:24.773138  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.776980  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777337  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.777390  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777623  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.777827  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.778030  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.778218  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.827144  167452 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-509298" context rescaled to 1 replicas
	I1004 01:55:24.827188  167452 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.170 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:55:24.829039  167452 out.go:177] * Verifying Kubernetes components...
	I1004 01:55:24.830422  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:24.912112  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:55:24.912145  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:55:24.941943  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.953635  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:55:24.953669  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:55:24.964038  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:25.010973  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.011004  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:55:25.069236  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.073447  167452 node_ready.go:35] waiting up to 6m0s for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.073533  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
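The sed pipeline above rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway (192.168.50.1 for this profile). To confirm the injected block afterwards, a hedged check:

    # Sketch: print the CoreDNS ConfigMap and show the injected hosts block.
    kubectl --context embed-certs-509298 -n kube-system get configmap coredns -o yaml \
      | grep -A 3 'hosts {'
    # Expected, per the sed expression in the log:
    #   hosts {
    #      192.168.50.1 host.minikube.internal
    #      fallthrough
    #   }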
	I1004 01:55:26.026178  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:23.280683  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.280934  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.276517  167452 node_ready.go:49] node "embed-certs-509298" has status "Ready":"True"
	I1004 01:55:25.276548  167452 node_ready.go:38] duration metric: took 203.068295ms waiting for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.276561  167452 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:25.459727  167452 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:26.648518  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706528042s)
	I1004 01:55:26.648633  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.648655  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.648984  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649002  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.649012  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.649021  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.649326  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:26.649367  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649378  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.670495  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.670520  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.670831  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.670890  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318331  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.35425456s)
	I1004 01:55:27.318392  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318407  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318442  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.249161738s)
	I1004 01:55:27.318496  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318502  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244935012s)
	I1004 01:55:27.318516  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318526  167452 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 01:55:27.318839  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318886  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318904  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318915  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318934  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318944  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318946  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318966  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318980  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318993  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.319203  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319225  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319232  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319242  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.319257  167452 addons.go:467] Verifying addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:27.319290  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319300  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.321408  167452 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1004 01:55:23.171045  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.171137  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.323360  167452 addons.go:502] enable addons completed in 2.631835233s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1004 01:55:27.504611  167452 pod_ready.go:102] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:28.987732  167452 pod_ready.go:92] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.987757  167452 pod_ready.go:81] duration metric: took 3.527990687s waiting for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.987769  167452 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993933  167452 pod_ready.go:92] pod "etcd-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.993953  167452 pod_ready.go:81] duration metric: took 6.17579ms waiting for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993966  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000725  167452 pod_ready.go:92] pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.000747  167452 pod_ready.go:81] duration metric: took 6.77205ms waiting for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000759  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005757  167452 pod_ready.go:92] pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.005779  167452 pod_ready.go:81] duration metric: took 5.011182ms waiting for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005790  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010519  167452 pod_ready.go:92] pod "kube-proxy-f99th" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.010537  167452 pod_ready.go:81] duration metric: took 4.738537ms waiting for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010548  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383772  167452 pod_ready.go:92] pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.383795  167452 pod_ready.go:81] duration metric: took 373.240101ms waiting for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383803  167452 pod_ready.go:38] duration metric: took 4.107228637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:29.383834  167452 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:29.383882  167452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:29.399227  167452 api_server.go:72] duration metric: took 4.572006648s to wait for apiserver process to appear ...
	I1004 01:55:29.399259  167452 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:29.399279  167452 api_server.go:253] Checking apiserver healthz at https://192.168.50.170:8443/healthz ...
	I1004 01:55:29.405336  167452 api_server.go:279] https://192.168.50.170:8443/healthz returned 200:
	ok
	I1004 01:55:29.406768  167452 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:29.406794  167452 api_server.go:131] duration metric: took 7.526875ms to wait for apiserver health ...
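The health probe above hits the apiserver's /healthz endpoint and then reads the control-plane version. The same check by hand, going through kubectl rather than the raw TLS client the harness uses (a sketch):

    # Sketch: ask the apiserver for its health and its reported version.
    kubectl --context embed-certs-509298 get --raw /healthz     # expect: ok
    kubectl --context embed-certs-509298 version -o json        # serverVersion.gitVersion: v1.28.2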
	I1004 01:55:29.406804  167452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:29.586194  167452 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:29.586225  167452 system_pods.go:61] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.586230  167452 system_pods.go:61] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.586236  167452 system_pods.go:61] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.586241  167452 system_pods.go:61] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.586248  167452 system_pods.go:61] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.586253  167452 system_pods.go:61] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.586261  167452 system_pods.go:61] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.586269  167452 system_pods.go:61] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.586276  167452 system_pods.go:74] duration metric: took 179.466307ms to wait for pod list to return data ...
	I1004 01:55:29.586289  167452 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:29.782372  167452 default_sa.go:45] found service account: "default"
	I1004 01:55:29.782395  167452 default_sa.go:55] duration metric: took 196.098004ms for default service account to be created ...
	I1004 01:55:29.782403  167452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:29.988230  167452 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:29.988261  167452 system_pods.go:89] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.988267  167452 system_pods.go:89] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.988271  167452 system_pods.go:89] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.988276  167452 system_pods.go:89] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.988281  167452 system_pods.go:89] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.988285  167452 system_pods.go:89] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.988298  167452 system_pods.go:89] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.988305  167452 system_pods.go:89] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.988313  167452 system_pods.go:126] duration metric: took 205.9045ms to wait for k8s-apps to be running ...
	I1004 01:55:29.988323  167452 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:29.988369  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:30.003487  167452 system_svc.go:56] duration metric: took 15.153598ms WaitForService to wait for kubelet.
	I1004 01:55:30.003513  167452 kubeadm.go:581] duration metric: took 5.176299768s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:30.003534  167452 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:30.184152  167452 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:30.184177  167452 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:30.184186  167452 node_conditions.go:105] duration metric: took 180.648418ms to run NodePressure ...
	I1004 01:55:30.184198  167452 start.go:228] waiting for startup goroutines ...
	I1004 01:55:30.184204  167452 start.go:233] waiting for cluster config update ...
	I1004 01:55:30.184213  167452 start.go:242] writing updated cluster config ...
	I1004 01:55:30.184486  167452 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:30.233803  167452 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:30.235636  167452 out.go:177] * Done! kubectl is now configured to use "embed-certs-509298" cluster and "default" namespace by default
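At this point the embed-certs-509298 profile has started cleanly and kubectl is pointed at it. A minimal post-start smoke check (a sketch; names come from the log above):

    # Sketch: confirm the active context and that the node and system pods look healthy.
    kubectl config current-context        # should print embed-certs-509298
    kubectl get nodes -o wide
    kubectl -n kube-system get pods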
	I1004 01:55:29.098156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:27.779362  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.779502  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:31.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.670021  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.678512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:32.172222  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:35.178103  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:34.279433  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:36.781532  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:34.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:37.170113  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:38.254127  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:39.278584  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.279085  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:39.668721  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.670095  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:44.330119  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:43.780710  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:45.782354  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.472905  166755 pod_ready.go:81] duration metric: took 4m0.000518679s waiting for pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:46.472936  166755 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:46.472946  166755 pod_ready.go:38] duration metric: took 4m5.201194434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
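This wait gives up after roughly four minutes because the metrics-server pod never reports Ready. When triaging such a failure by hand, the usual first step is to look at the pod's container statuses and events (a sketch; <profile> stands for whichever cluster this goroutine belongs to, and the label selector is the addon's usual one, both assumptions):

    # Sketch: inspect why metrics-server stays NotReady (replace <profile>).
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-mmm7c
    kubectl --context <profile> -n kube-system get events --sort-by=.lastTimestamp | tail -n 20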
	I1004 01:55:46.472975  166755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:46.473020  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:46.473075  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:46.533201  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:46.533233  166755 cri.go:89] found id: ""
	I1004 01:55:46.533243  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:46.533304  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.538613  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:46.538673  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:46.580801  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:46.580826  166755 cri.go:89] found id: ""
	I1004 01:55:46.580834  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:46.580896  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.586423  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:46.586510  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:46.645487  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:46.645526  166755 cri.go:89] found id: ""
	I1004 01:55:46.645535  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:46.645618  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.650643  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:46.650719  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:46.693457  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:46.693482  166755 cri.go:89] found id: ""
	I1004 01:55:46.693492  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:46.693553  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.698463  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:46.698538  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:46.744251  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:46.744279  166755 cri.go:89] found id: ""
	I1004 01:55:46.744289  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:46.744353  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.749343  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:46.749419  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:46.792717  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:46.792745  166755 cri.go:89] found id: ""
	I1004 01:55:46.792755  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:46.792820  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.797417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:46.797492  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:46.843004  166755 cri.go:89] found id: ""
	I1004 01:55:46.843033  166755 logs.go:284] 0 containers: []
	W1004 01:55:46.843044  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:46.843051  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:46.843114  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:44.169475  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.171848  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:47.402086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:46.883372  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:46.883397  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.883405  166755 cri.go:89] found id: ""
	I1004 01:55:46.883415  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:46.883476  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.888350  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.892981  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:46.893010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.936801  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:46.936829  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:46.983092  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:46.983124  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:46.997604  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:46.997634  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:47.041461  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:47.041500  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:47.098192  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:47.098234  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:47.139982  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:47.140010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:47.184753  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:47.184789  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:47.242417  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:47.242456  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:47.290664  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:47.290696  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:47.332998  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:47.333035  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:47.779448  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:47.779490  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:47.951031  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:47.951067  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
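The logs.go block above is minikube collecting diagnostics: for each control-plane component it lists matching containers with crictl and then tails their logs, plus journalctl for kubelet and CRI-O. Repeating that by hand from inside the guest (a sketch; <profile> and <container-id> are placeholders):

    # Sketch: reproduce the log gathering shown above from inside the VM.
    minikube -p <profile> ssh
    # ...then, in the guest shell:
    sudo crictl ps -a --quiet --name=kube-apiserver   # find the container id
    sudo crictl logs --tail 400 <container-id>        # tail that container's logs
    sudo journalctl -u kubelet -n 400                 # kubelet side
    sudo journalctl -u crio -n 400                    # CRI-O side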
	I1004 01:55:50.505155  166755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:50.522774  166755 api_server.go:72] duration metric: took 4m16.635946913s to wait for apiserver process to appear ...
	I1004 01:55:50.522804  166755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:50.522848  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:50.522929  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:50.565196  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.565220  166755 cri.go:89] found id: ""
	I1004 01:55:50.565232  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:50.565288  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.569426  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:50.569488  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:50.608113  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:50.608138  166755 cri.go:89] found id: ""
	I1004 01:55:50.608147  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:50.608194  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.612671  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:50.612730  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:50.659777  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.659806  166755 cri.go:89] found id: ""
	I1004 01:55:50.659817  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:50.659888  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.664188  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:50.664260  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:50.709318  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:50.709346  166755 cri.go:89] found id: ""
	I1004 01:55:50.709358  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:50.709422  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.713604  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:50.713674  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:50.757565  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:50.757597  166755 cri.go:89] found id: ""
	I1004 01:55:50.757607  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:50.757666  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.761646  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:50.761711  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:50.802683  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:50.802712  166755 cri.go:89] found id: ""
	I1004 01:55:50.802722  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:50.802785  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.807369  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:50.807443  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:50.849917  166755 cri.go:89] found id: ""
	I1004 01:55:50.849952  166755 logs.go:284] 0 containers: []
	W1004 01:55:50.849965  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:50.849974  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:50.850042  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:50.889329  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:50.889353  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:50.889357  166755 cri.go:89] found id: ""
	I1004 01:55:50.889365  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:50.889489  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.894295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.898319  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:50.898345  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:50.950303  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:50.950339  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.989731  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:50.989767  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:51.036483  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:51.036526  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:51.094053  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:51.094109  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:51.234887  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:51.234922  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:51.283233  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:51.283276  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:51.340569  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:51.340610  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:51.751585  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:51.751629  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:51.765404  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:51.765446  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:51.813579  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:51.813611  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:51.853408  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:51.853458  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:48.670114  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:51.169274  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:53.482075  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:56.554101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:51.899649  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:51.899686  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.447493  166755 api_server.go:253] Checking apiserver healthz at https://192.168.83.165:8443/healthz ...
	I1004 01:55:54.453104  166755 api_server.go:279] https://192.168.83.165:8443/healthz returned 200:
	ok
	I1004 01:55:54.455299  166755 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:54.455327  166755 api_server.go:131] duration metric: took 3.932514868s to wait for apiserver health ...
	I1004 01:55:54.455338  166755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:54.455368  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:54.455431  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:54.501159  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:54.501180  166755 cri.go:89] found id: ""
	I1004 01:55:54.501188  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:54.501250  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.506342  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:54.506418  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:54.548780  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:54.548801  166755 cri.go:89] found id: ""
	I1004 01:55:54.548808  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:54.548863  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.560318  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:54.560397  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:54.606477  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:54.606509  166755 cri.go:89] found id: ""
	I1004 01:55:54.606521  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:54.606581  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.611004  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:54.611069  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:54.657003  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.657031  166755 cri.go:89] found id: ""
	I1004 01:55:54.657041  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:54.657106  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.661386  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:54.661459  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:54.713209  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:54.713237  166755 cri.go:89] found id: ""
	I1004 01:55:54.713246  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:54.713295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.718417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:54.718489  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:54.767945  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:54.767969  166755 cri.go:89] found id: ""
	I1004 01:55:54.767979  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:54.768040  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.772488  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:54.772576  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:54.823905  166755 cri.go:89] found id: ""
	I1004 01:55:54.823935  166755 logs.go:284] 0 containers: []
	W1004 01:55:54.823945  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:54.823954  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:54.824017  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:54.878037  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:54.878069  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:54.878076  166755 cri.go:89] found id: ""
	I1004 01:55:54.878086  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:54.878146  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.883456  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.887685  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:54.887708  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:55.021714  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:55.021761  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:55.066557  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:55.066595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:55.125278  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:55.125336  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:55.170570  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:55.170607  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:55.212833  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:55.212866  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:55.552035  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:55.552080  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:55.601698  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:55.601738  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:55.662745  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:55.662786  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:55.707632  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:55.707665  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:55.746461  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:55.746489  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:55.809111  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:55.809150  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:55.850557  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:55.850595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
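
Note: each "Gathering logs for ..." step above just shells into the node and tails the relevant unit or container. To replay one of these collection steps by hand against the same profile, something along these lines should work (a sketch, not part of the recorded run; the <container-id> placeholder has to be filled in from the crictl ps output, since the IDs above are specific to this run):

	minikube -p no-preload-273516 ssh -- sudo crictl ps -a
	minikube -p no-preload-273516 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	minikube -p no-preload-273516 ssh -- sudo journalctl -u kubelet -n 400
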
	I1004 01:55:53.670067  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:55.670340  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:58.374828  166755 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:58.374864  166755 system_pods.go:61] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.374871  166755 system_pods.go:61] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.374878  166755 system_pods.go:61] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.374885  166755 system_pods.go:61] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.374891  166755 system_pods.go:61] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.374898  166755 system_pods.go:61] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.374909  166755 system_pods.go:61] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.374919  166755 system_pods.go:61] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.374934  166755 system_pods.go:74] duration metric: took 3.919586902s to wait for pod list to return data ...
	I1004 01:55:58.374943  166755 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:58.379203  166755 default_sa.go:45] found service account: "default"
	I1004 01:55:58.379228  166755 default_sa.go:55] duration metric: took 4.271125ms for default service account to be created ...
	I1004 01:55:58.379237  166755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:58.389346  166755 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:58.389369  166755 system_pods.go:89] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.389375  166755 system_pods.go:89] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.389379  166755 system_pods.go:89] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.389384  166755 system_pods.go:89] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.389388  166755 system_pods.go:89] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.389391  166755 system_pods.go:89] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.389399  166755 system_pods.go:89] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.389404  166755 system_pods.go:89] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.389411  166755 system_pods.go:126] duration metric: took 10.168718ms to wait for k8s-apps to be running ...
	I1004 01:55:58.389422  166755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:58.389467  166755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:58.410785  166755 system_svc.go:56] duration metric: took 21.353423ms WaitForService to wait for kubelet.
	I1004 01:55:58.410814  166755 kubeadm.go:581] duration metric: took 4m24.523994722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:58.410840  166755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:58.414873  166755 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:58.414899  166755 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:58.414913  166755 node_conditions.go:105] duration metric: took 4.067596ms to run NodePressure ...
	I1004 01:55:58.414927  166755 start.go:228] waiting for startup goroutines ...
	I1004 01:55:58.414936  166755 start.go:233] waiting for cluster config update ...
	I1004 01:55:58.414948  166755 start.go:242] writing updated cluster config ...
	I1004 01:55:58.415228  166755 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:58.469095  166755 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:58.470860  166755 out.go:177] * Done! kubectl is now configured to use "no-preload-273516" cluster and "default" namespace by default
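
Note: the "Done!" line above means the no-preload-273516 profile finished starting and its context was written to the host kubeconfig. A quick manual spot check of the pod state summarized above could look like this (a sketch; the context name is taken from the log line above):

	kubectl --context no-preload-273516 get nodes -o wide
	kubectl --context no-preload-273516 get pods -n kube-system
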
	I1004 01:55:57.863028  167496 pod_ready.go:81] duration metric: took 4m0.000377885s waiting for pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:57.863064  167496 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:57.863085  167496 pod_ready.go:38] duration metric: took 4m1.198718353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:57.863115  167496 kubeadm.go:640] restartCluster took 5m18.524534819s
	W1004 01:55:57.863173  167496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:55:57.863207  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:56:02.773154  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.909900495s)
	I1004 01:56:02.773229  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:02.786455  167496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:56:02.796780  167496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:56:02.806618  167496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:56:02.806677  167496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1004 01:56:02.872853  167496 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1004 01:56:02.872972  167496 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:56:03.024967  167496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:56:03.025128  167496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:56:03.025294  167496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:56:03.249926  167496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:56:03.251503  167496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:56:03.259788  167496 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1004 01:56:03.380740  167496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:56:03.382796  167496 out.go:204]   - Generating certificates and keys ...
	I1004 01:56:03.382964  167496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:56:03.383087  167496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:56:03.383195  167496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:56:03.383291  167496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:56:03.383404  167496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:56:03.383494  167496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:56:03.383899  167496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:56:03.384184  167496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:56:03.384678  167496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:56:03.385233  167496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:56:03.385302  167496 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:56:03.385358  167496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:56:03.892124  167496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:56:04.106548  167496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:56:04.323375  167496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:56:04.510112  167496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:56:04.512389  167496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:56:02.634095  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:05.710104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:04.514200  167496 out.go:204]   - Booting up control plane ...
	I1004 01:56:04.514318  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:56:04.523675  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:56:04.534185  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:56:04.535396  167496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:56:04.551484  167496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:56:11.786134  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:14.564099  167496 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.011014 seconds
	I1004 01:56:14.564257  167496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:56:14.578656  167496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:56:15.106513  167496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:56:15.106688  167496 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-107182 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1004 01:56:15.616926  167496 kubeadm.go:322] [bootstrap-token] Using token: ocks1c.c9c0w76e1jxk27wy
	I1004 01:56:15.619692  167496 out.go:204]   - Configuring RBAC rules ...
	I1004 01:56:15.619849  167496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:56:15.627037  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:56:15.631821  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:56:15.635639  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:56:15.641343  167496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:56:15.709440  167496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:56:16.046524  167496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:56:16.046544  167496 kubeadm.go:322] 
	I1004 01:56:16.046605  167496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:56:16.046616  167496 kubeadm.go:322] 
	I1004 01:56:16.046691  167496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:56:16.046698  167496 kubeadm.go:322] 
	I1004 01:56:16.046727  167496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:56:16.046781  167496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:56:16.046877  167496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:56:16.046902  167496 kubeadm.go:322] 
	I1004 01:56:16.046980  167496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:56:16.047101  167496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:56:16.047198  167496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:56:16.047210  167496 kubeadm.go:322] 
	I1004 01:56:16.047316  167496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1004 01:56:16.047429  167496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:56:16.047448  167496 kubeadm.go:322] 
	I1004 01:56:16.047560  167496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.047736  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:56:16.047783  167496 kubeadm.go:322]     --control-plane 	  
	I1004 01:56:16.047790  167496 kubeadm.go:322] 
	I1004 01:56:16.047912  167496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:56:16.047926  167496 kubeadm.go:322] 
	I1004 01:56:16.048006  167496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.048141  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:56:16.048764  167496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:56:16.048792  167496 cni.go:84] Creating CNI manager for ""
	I1004 01:56:16.048803  167496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:56:16.051468  167496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:56:14.858093  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:16.052923  167496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:56:16.062452  167496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
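
Note: the 457-byte file pushed above is the bridge CNI config announced by the "Configuring bridge CNI" line. Its contents on the node could be inspected with (a sketch; the path is taken from the line above):

	minikube -p old-k8s-version-107182 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
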
	I1004 01:56:16.083093  167496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:56:16.083231  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.083232  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=old-k8s-version-107182 minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.097641  167496 ops.go:34] apiserver oom_adj: -16
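
Note: the long run of identical "kubectl get sa default" invocations that follows appears to be minikube polling for the default service account to exist before it considers the kube-system privilege setup (the label and clusterrolebinding commands issued just above) complete; the loop ends at the "elevateKubeSystemPrivileges" duration line further down. The same check could be run by hand with the binary path and kubeconfig shown in the log:

	sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
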
	I1004 01:56:16.345591  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.432507  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:17.021142  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.938186  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:17.521246  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.020458  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.521120  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.020993  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.521313  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.020752  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.520524  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.020817  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.521038  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:22.020893  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.014159  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:22.520834  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.021375  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.521450  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.021541  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.521194  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.021420  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.521388  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.020861  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.520474  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:27.020520  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.094110  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:27.520733  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.020857  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.520471  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.020869  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.520801  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.020670  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.521376  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.021462  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.521133  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.021118  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.139808  167496 kubeadm.go:1081] duration metric: took 16.056644408s to wait for elevateKubeSystemPrivileges.
	I1004 01:56:32.139853  167496 kubeadm.go:406] StartCluster complete in 5m52.878327636s
	I1004 01:56:32.139879  167496 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.139983  167496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:56:32.143255  167496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.143507  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:56:32.143608  167496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:56:32.143692  167496 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143710  167496 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-107182"
	I1004 01:56:32.143708  167496 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1004 01:56:32.143717  167496 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:56:32.143732  167496 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143751  167496 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-107182"
	W1004 01:56:32.143762  167496 addons.go:240] addon metrics-server should already be in state true
	I1004 01:56:32.143777  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143807  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143717  167496 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143830  167496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-107182"
	I1004 01:56:32.144169  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144206  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144216  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144236  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144237  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144317  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.161736  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1004 01:56:32.161739  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I1004 01:56:32.162384  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162494  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162735  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1004 01:56:32.163007  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163024  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163156  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163168  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163232  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.163731  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163747  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163809  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.163851  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164091  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164163  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.164565  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.164611  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.165506  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.165553  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.168699  167496 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-107182"
	W1004 01:56:32.168721  167496 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:56:32.168751  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.169121  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.169148  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.187125  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I1004 01:56:32.187814  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188164  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1004 01:56:32.188441  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.188462  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.188705  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188823  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1004 01:56:32.188990  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.189161  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.189340  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189357  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189428  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.189669  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189688  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189750  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190009  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.190037  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190736  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.190776  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.191392  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.193250  167496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:56:32.192019  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.194795  167496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.194811  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:56:32.194833  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196365  167496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:56:32.197757  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:56:32.197778  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:56:32.197798  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196532  167496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-107182" context rescaled to 1 replicas
	I1004 01:56:32.197859  167496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:56:32.199796  167496 out.go:177] * Verifying Kubernetes components...
	I1004 01:56:32.201368  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:32.202167  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202462  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202766  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.202794  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203229  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203304  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.203321  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203485  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203677  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.203744  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.204034  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204104  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204194  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.204755  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.211128  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I1004 01:56:32.211596  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.212134  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.212157  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.212528  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.212740  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.214335  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.214592  167496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.214608  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:56:32.214627  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.217280  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.217751  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.217781  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.218036  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.218202  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.218378  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.218528  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.390605  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.392051  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.434602  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:56:32.434629  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:56:32.469744  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:56:32.469793  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:56:32.488555  167496 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.489370  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:56:32.500794  167496 node_ready.go:49] node "old-k8s-version-107182" has status "Ready":"True"
	I1004 01:56:32.500818  167496 node_ready.go:38] duration metric: took 12.232731ms waiting for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.500828  167496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:32.514535  167496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:32.515832  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:32.515859  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:56:32.582811  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:33.449546  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05890047s)
	I1004 01:56:33.449619  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.449635  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450076  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450100  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450113  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.450115  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.450139  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450431  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450454  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450503  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.468938  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.468964  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.469311  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.469332  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.700534  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308435267s)
	I1004 01:56:33.700563  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.211163368s)
	I1004 01:56:33.700582  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.700596  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.700593  167496 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1004 01:56:33.700975  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.700998  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.701010  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.701012  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701021  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.701273  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701321  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.701330  167496 main.go:141] libmachine: Making call to close connection to plugin binary
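
Note: the "host record injected" line a few lines above reports that the piped sed/replace command rewrote the CoreDNS ConfigMap so that host.minikube.internal resolves to 192.168.72.1. The result could be checked on the node with (a sketch mirroring the command in the log):

	sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml
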
	I1004 01:56:33.823328  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240468144s)
	I1004 01:56:33.823384  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823398  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.823769  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.823805  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823819  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823832  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.824142  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.824164  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.824176  167496 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-107182"
	I1004 01:56:33.825973  167496 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 01:56:33.162156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:33.827977  167496 addons.go:502] enable addons completed in 1.684381662s: enabled=[default-storageclass storage-provisioner metrics-server]
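
Note: with the addon step finished above, the enabled addon set for this profile can be confirmed from the host (a sketch; the profile name is taken from the log):

	minikube -p old-k8s-version-107182 addons list
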
	I1004 01:56:34.532496  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:37.031254  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:39.242136  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:39.031853  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:41.531371  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:42.314165  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:44.032920  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:44.533712  167496 pod_ready.go:92] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.533740  167496 pod_ready.go:81] duration metric: took 12.019178851s waiting for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.533753  167496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539300  167496 pod_ready.go:92] pod "kube-proxy-8lcf5" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.539327  167496 pod_ready.go:81] duration metric: took 5.564927ms waiting for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539337  167496 pod_ready.go:38] duration metric: took 12.038496722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:44.539360  167496 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:56:44.539419  167496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:56:44.554851  167496 api_server.go:72] duration metric: took 12.356945821s to wait for apiserver process to appear ...
	I1004 01:56:44.554881  167496 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:56:44.554900  167496 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I1004 01:56:44.562352  167496 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I1004 01:56:44.563304  167496 api_server.go:141] control plane version: v1.16.0
	I1004 01:56:44.563333  167496 api_server.go:131] duration metric: took 8.444498ms to wait for apiserver health ...
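Note: the healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal stand-alone sketch of that probe (URL taken from the log line above; skipping TLS verification is a simplification of this sketch, not how minikube authenticates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// healthzOK polls the apiserver /healthz endpoint until it answers 200 "ok"
// or the timeout expires.
func healthzOK(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// illustrative shortcut; the real check uses the cluster CA
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s did not report healthy within %v", url, timeout)
}

func main() {
	_ = healthzOK("https://192.168.72.182:8443/healthz", 2*time.Minute)
}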
	I1004 01:56:44.563344  167496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:56:44.567672  167496 system_pods.go:59] 4 kube-system pods found
	I1004 01:56:44.567701  167496 system_pods.go:61] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.567708  167496 system_pods.go:61] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.567719  167496 system_pods.go:61] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.567728  167496 system_pods.go:61] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.567736  167496 system_pods.go:74] duration metric: took 4.384195ms to wait for pod list to return data ...
	I1004 01:56:44.567746  167496 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:56:44.570566  167496 default_sa.go:45] found service account: "default"
	I1004 01:56:44.570597  167496 default_sa.go:55] duration metric: took 2.843182ms for default service account to be created ...
	I1004 01:56:44.570608  167496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:56:44.575497  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.575524  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.575534  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.575543  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.575552  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.575572  167496 retry.go:31] will retry after 201.187376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:44.781105  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.781140  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.781146  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.781155  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.781162  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.781179  167496 retry.go:31] will retry after 304.433498ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.090030  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.090055  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.090061  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.090067  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.090073  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.090088  167496 retry.go:31] will retry after 344.077296ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.439684  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.439712  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.439717  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.439723  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.439729  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.439743  167496 retry.go:31] will retry after 379.883887ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.824813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.824839  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.824844  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.824853  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.824859  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.824873  167496 retry.go:31] will retry after 650.141708ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:46.480447  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:46.480473  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:46.480478  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:46.480486  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:46.480492  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:46.480507  167496 retry.go:31] will retry after 870.616376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:47.356424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:47.356452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:47.356457  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:47.356464  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:47.356470  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:47.356486  167496 retry.go:31] will retry after 972.499927ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:48.394163  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:51.466067  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:48.333234  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:48.333263  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:48.333269  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:48.333276  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:48.333282  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:48.333296  167496 retry.go:31] will retry after 1.071674914s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:49.410813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:49.410843  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:49.410853  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:49.410864  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:49.410873  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:49.410892  167496 retry.go:31] will retry after 1.833649065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:51.251023  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:51.251046  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:51.251052  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:51.251058  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:51.251065  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:51.251080  167496 retry.go:31] will retry after 1.914402614s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:53.170633  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:53.170675  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:53.170684  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:53.170697  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:53.170706  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:53.170727  167496 retry.go:31] will retry after 2.900802753s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:56.077479  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:56.077505  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:56.077510  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:56.077517  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:56.077523  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:56.077539  167496 retry.go:31] will retry after 2.931373296s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:57.546142  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:00.618191  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:59.014602  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:59.014631  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:59.014639  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:59.014650  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:59.014658  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:59.014679  167496 retry.go:31] will retry after 3.641834809s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:06.698118  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:02.662919  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:02.662957  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:02.662962  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:02.662978  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:02.662986  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:02.663000  167496 retry.go:31] will retry after 5.249216721s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:09.770058  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:07.918510  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:07.918540  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:07.918545  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:07.918551  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:07.918558  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:07.918575  167496 retry.go:31] will retry after 5.21551618s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:15.850131  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:13.139424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:13.139452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:13.139461  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:13.139470  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:13.139480  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:13.139499  167496 retry.go:31] will retry after 6.379920631s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:18.922143  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:19.525272  167496 system_pods.go:86] 5 kube-system pods found
	I1004 01:57:19.525311  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:19.525322  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Pending
	I1004 01:57:19.525329  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:19.525340  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:19.525350  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:19.525372  167496 retry.go:31] will retry after 7.200178423s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:25.002152  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:26.734572  167496 system_pods.go:86] 6 kube-system pods found
	I1004 01:57:26.734603  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:26.734610  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:26.734615  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:26.734619  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:26.734626  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:26.734640  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:26.734662  167496 retry.go:31] will retry after 10.892871067s: missing components: etcd, kube-apiserver
	I1004 01:57:28.078109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:34.158104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:37.634963  167496 system_pods.go:86] 8 kube-system pods found
	I1004 01:57:37.634993  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:37.634998  167496 system_pods.go:89] "etcd-old-k8s-version-107182" [18310540-21e4-4225-9ce0-e662fae16ca5] Running
	I1004 01:57:37.635003  167496 system_pods.go:89] "kube-apiserver-old-k8s-version-107182" [7418c38e-cae2-4d96-bb43-6827c37fc3dd] Running
	I1004 01:57:37.635008  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:37.635012  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:37.635015  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:37.635023  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:37.635028  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:37.635035  167496 system_pods.go:126] duration metric: took 53.064420406s to wait for k8s-apps to be running ...
	I1004 01:57:37.635042  167496 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:57:37.635088  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:57:37.654311  167496 system_svc.go:56] duration metric: took 19.259695ms WaitForService to wait for kubelet.
	I1004 01:57:37.654335  167496 kubeadm.go:581] duration metric: took 1m5.456439597s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:57:37.654358  167496 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:57:37.658645  167496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:57:37.658691  167496 node_conditions.go:123] node cpu capacity is 2
	I1004 01:57:37.658730  167496 node_conditions.go:105] duration metric: took 4.365872ms to run NodePressure ...
	I1004 01:57:37.658744  167496 start.go:228] waiting for startup goroutines ...
	I1004 01:57:37.658753  167496 start.go:233] waiting for cluster config update ...
	I1004 01:57:37.658763  167496 start.go:242] writing updated cluster config ...
	I1004 01:57:37.659093  167496 ssh_runner.go:195] Run: rm -f paused
	I1004 01:57:37.707603  167496 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1004 01:57:37.709678  167496 out.go:177] 
	W1004 01:57:37.711433  167496 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1004 01:57:37.713148  167496 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1004 01:57:37.714765  167496 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-107182" cluster and "default" namespace by default
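Note: the long run of retry.go lines above is minikube's generic wait loop at work: each attempt re-lists the kube-system pods and, while control-plane components are still missing, schedules another attempt after a slightly longer, jittered delay. A minimal sketch of that wait-with-backoff pattern in plain Go (helper names are hypothetical, not minikube's actual retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it reports done, growing the delay a
// little each attempt and adding jitter, up to a total timeout.
func waitFor(timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		// grow the delay ~1.5x per attempt with +/-25% jitter, roughly the
		// shape of the increasing intervals seen in the log
		jitter := 0.75 + rand.Float64()*0.5
		sleep := time.Duration(float64(delay) * jitter)
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	missing := 4
	err := waitFor(30*time.Second, func() (bool, error) {
		missing-- // stand-in for "re-list kube-system pods and count what is Running"
		return missing <= 0, nil
	})
	fmt.Println("result:", err)
}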
	I1004 01:57:37.226085  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:43.306106  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:46.378086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:49.379613  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:57:49.379686  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:57:49.381326  169515 machine.go:91] provisioned docker machine in 4m37.42034364s
	I1004 01:57:49.381400  169515 fix.go:56] fixHost completed within 4m37.441947276s
	I1004 01:57:49.381413  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 4m37.441976851s
	W1004 01:57:49.381431  169515 start.go:688] error starting host: provision: host is not running
	W1004 01:57:49.381511  169515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1004 01:57:49.381527  169515 start.go:703] Will try again in 5 seconds ...
	I1004 01:57:54.381970  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:57:54.382105  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 82.376µs
	I1004 01:57:54.382139  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:57:54.382148  169515 fix.go:54] fixHost starting: 
	I1004 01:57:54.382415  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:57:54.382441  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:57:54.397922  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1004 01:57:54.398391  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:57:54.398857  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:57:54.398879  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:57:54.399227  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:57:54.399426  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:57:54.399606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:57:54.401353  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Stopped err=<nil>
	I1004 01:57:54.401379  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	W1004 01:57:54.401556  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:57:54.403451  169515 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-239802" ...
	I1004 01:57:54.404883  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Start
	I1004 01:57:54.405065  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring networks are active...
	I1004 01:57:54.405797  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network default is active
	I1004 01:57:54.406184  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network mk-default-k8s-diff-port-239802 is active
	I1004 01:57:54.406630  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Getting domain xml...
	I1004 01:57:54.407374  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Creating domain...
	I1004 01:57:55.768364  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting to get IP...
	I1004 01:57:55.769252  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769744  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769819  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.769720  170429 retry.go:31] will retry after 205.391459ms: waiting for machine to come up
	I1004 01:57:55.977260  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977696  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977721  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.977651  170429 retry.go:31] will retry after 308.679034ms: waiting for machine to come up
	I1004 01:57:56.288223  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288707  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288740  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.288656  170429 retry.go:31] will retry after 419.166959ms: waiting for machine to come up
	I1004 01:57:56.708911  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709549  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709581  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.709483  170429 retry.go:31] will retry after 402.015435ms: waiting for machine to come up
	I1004 01:57:57.113100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113682  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113735  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.113608  170429 retry.go:31] will retry after 555.795777ms: waiting for machine to come up
	I1004 01:57:57.671427  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672087  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672124  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.671985  170429 retry.go:31] will retry after 891.745334ms: waiting for machine to come up
	I1004 01:57:58.564986  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565501  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565533  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:58.565436  170429 retry.go:31] will retry after 897.272137ms: waiting for machine to come up
	I1004 01:57:59.465110  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465742  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465773  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:59.465695  170429 retry.go:31] will retry after 1.042370898s: waiting for machine to come up
	I1004 01:58:00.509812  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510320  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:00.510296  170429 retry.go:31] will retry after 1.512718285s: waiting for machine to come up
	I1004 01:58:02.024160  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024566  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:02.024502  170429 retry.go:31] will retry after 1.493800744s: waiting for machine to come up
	I1004 01:58:03.520361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520958  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520991  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:03.520911  170429 retry.go:31] will retry after 2.206730553s: waiting for machine to come up
	I1004 01:58:05.729534  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730016  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730050  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:05.729969  170429 retry.go:31] will retry after 3.088350315s: waiting for machine to come up
	I1004 01:58:08.820266  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820743  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820774  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:08.820689  170429 retry.go:31] will retry after 2.773482095s: waiting for machine to come up
	I1004 01:58:11.595977  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596515  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596540  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:11.596475  170429 retry.go:31] will retry after 3.486376696s: waiting for machine to come up
	I1004 01:58:15.084904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.085418  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Found IP for machine: 192.168.61.105
	I1004 01:58:15.085447  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserving static IP address...
	I1004 01:58:15.085460  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has current primary IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.086007  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.086039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserved static IP address: 192.168.61.105
	I1004 01:58:15.086059  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | skip adding static IP to network mk-default-k8s-diff-port-239802 - found existing host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"}
	I1004 01:58:15.086080  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Getting to WaitForSSH function...
	I1004 01:58:15.086098  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for SSH to be available...
	I1004 01:58:15.088134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088506  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.088538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088726  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH client type: external
	I1004 01:58:15.088751  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa (-rw-------)
	I1004 01:58:15.088802  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:58:15.088817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | About to run SSH command:
	I1004 01:58:15.088829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | exit 0
	I1004 01:58:15.226051  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | SSH cmd err, output: <nil>: 
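Note: the WaitForSSH step above shells out to the system ssh client and simply runs `exit 0` until the command succeeds. A rough stand-alone equivalent using a subset of the options shown in the log (the helper itself is illustrative, not libmachine's code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "ssh ... exit 0" succeeds against the guest.
func sshReady(host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit", "0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became available", host)
}

func main() {
	// host and key path mirror the values reported in the log above
	err := sshReady("192.168.61.105",
		"/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa")
	fmt.Println(err)
}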
	I1004 01:58:15.226408  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetConfigRaw
	I1004 01:58:15.227055  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.229669  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.230108  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230390  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:58:15.230651  169515 machine.go:88] provisioning docker machine ...
	I1004 01:58:15.230676  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:15.230912  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231113  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:58:15.231134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231297  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.233606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.233990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.234026  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.234134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.234317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234484  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234663  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.234867  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.235199  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.235213  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:58:15.374541  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-239802
	
	I1004 01:58:15.374573  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.377761  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378278  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.378321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378494  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.378705  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378967  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.379135  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.379569  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.379594  169515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-239802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-239802/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-239802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:58:15.520076  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:58:15.520107  169515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:58:15.520129  169515 buildroot.go:174] setting up certificates
	I1004 01:58:15.520141  169515 provision.go:83] configureAuth start
	I1004 01:58:15.520155  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.520502  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.523317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.523814  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.523854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.524058  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.526453  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526752  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.526794  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526920  169515 provision.go:138] copyHostCerts
	I1004 01:58:15.526985  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:58:15.527069  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:58:15.527197  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:58:15.527323  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:58:15.527337  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:58:15.527373  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:58:15.527450  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:58:15.527460  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:58:15.527490  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:58:15.527550  169515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-239802 san=[192.168.61.105 192.168.61.105 localhost 127.0.0.1 minikube default-k8s-diff-port-239802]
	I1004 01:58:15.632152  169515 provision.go:172] copyRemoteCerts
	I1004 01:58:15.632211  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:58:15.632236  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.635344  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.635733  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635886  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.636100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.636262  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.636411  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:15.731442  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1004 01:58:15.755690  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:58:15.781135  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:58:15.805779  169515 provision.go:86] duration metric: configureAuth took 285.621049ms
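Note: configureAuth above generates a server certificate whose SANs cover the VM IP, localhost and the machine name, then copies it to /etc/docker on the guest. A compressed sketch of that certificate generation with Go's crypto/x509 (a throwaway CA is created here only to keep the example self-contained; the real provisioner reuses the CA under .minikube/certs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// throwaway CA
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// server certificate with the SANs reported in the configureAuth step
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-239802"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-239802"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.105"), net.ParseIP("127.0.0.1")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}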
	I1004 01:58:15.805813  169515 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:58:15.806097  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:58:15.806193  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.809186  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.809648  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.810105  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810354  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810577  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.810822  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.811265  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.811283  169515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:58:16.145471  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:58:16.145515  169515 machine.go:91] provisioned docker machine in 914.847777ms
	I1004 01:58:16.145528  169515 start.go:300] post-start starting for "default-k8s-diff-port-239802" (driver="kvm2")
	I1004 01:58:16.145541  169515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:58:16.145564  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.145936  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:58:16.145970  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.148759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149272  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.149306  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149563  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.149803  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.150023  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.150185  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.245579  169515 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:58:16.250364  169515 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:58:16.250394  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:58:16.250472  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:58:16.250566  169515 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:58:16.250821  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:58:16.260991  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:16.283999  169515 start.go:303] post-start completed in 138.45373ms
	I1004 01:58:16.284022  169515 fix.go:56] fixHost completed within 21.901874601s
	I1004 01:58:16.284043  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.286817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287150  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.287174  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287383  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.287598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287848  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.288010  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:16.288381  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:16.288414  169515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:58:16.418775  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696384696.400645117
	
	I1004 01:58:16.418799  169515 fix.go:206] guest clock: 1696384696.400645117
	I1004 01:58:16.418806  169515 fix.go:219] Guest: 2023-10-04 01:58:16.400645117 +0000 UTC Remote: 2023-10-04 01:58:16.284026062 +0000 UTC m=+304.486597710 (delta=116.619055ms)
	I1004 01:58:16.418832  169515 fix.go:190] guest clock delta is within tolerance: 116.619055ms
	I1004 01:58:16.418837  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 22.036713239s
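
The fix.go lines above read the guest clock over SSH as a seconds.nanoseconds timestamp, compare it against the host-side reference time, and accept the drift because it is within tolerance. A minimal Go sketch of that comparison follows; the 2-second tolerance and the helper name are illustrative assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"math"
	"time"
)

// clockDeltaWithinTolerance reports whether the drift between the guest's
// clock and the host's reference time is acceptable. The tolerance passed in
// main is an assumed illustrative value.
func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Guest time as reported by the remote date command, host time from the local clock.
	guest := time.Unix(1696384696, 400645117)
	host := time.Now()
	delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
}
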
	I1004 01:58:16.418861  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.419152  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:16.421829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422225  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.422265  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.422990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423191  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423288  169515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:58:16.423361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.423400  169515 ssh_runner.go:195] Run: cat /version.json
	I1004 01:58:16.423430  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.426244  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426412  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426666  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426835  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.426903  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426928  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.427049  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427079  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.427257  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427305  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427389  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.427491  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427616  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
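
Several steps above and below run commands on the VM through an SSH client built from the machine's IP, port 22, key path, and username (the sshutil.go lines). Below is a minimal, self-contained Go sketch of that pattern using golang.org/x/crypto/ssh; the helper name, the skipped host-key verification, and the example command are assumptions for illustration only, not minikube's ssh_runner implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials host:22 as user with the given private key and returns the
// combined output of cmd. Host key verification is skipped purely for brevity.
func runOverSSH(host, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.105", "docker",
		"/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa",
		"cat /etc/os-release")
	fmt.Println(out, err)
}
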
	I1004 01:58:16.541652  169515 ssh_runner.go:195] Run: systemctl --version
	I1004 01:58:16.548207  169515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:58:16.689236  169515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:58:16.695609  169515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:58:16.695700  169515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:58:16.711541  169515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:58:16.711569  169515 start.go:469] detecting cgroup driver to use...
	I1004 01:58:16.711648  169515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:58:16.727693  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:58:16.741081  169515 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:58:16.741145  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:58:16.754740  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:58:16.768697  169515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:58:16.892808  169515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:58:17.012129  169515 docker.go:213] disabling docker service ...
	I1004 01:58:17.012203  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:58:17.027872  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:58:17.039804  169515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:58:17.138577  169515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:58:17.242819  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:58:17.255768  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:58:17.273761  169515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:58:17.273824  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.284028  169515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:58:17.284103  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.294763  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.304668  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.314305  169515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
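
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, and force conmon into the pod cgroup, before the stale CNI directory is removed. A small Go sketch that assembles the same shell commands; the function is hypothetical, while the sed expressions are copied from the log.

package main

import "fmt"

// crioReconfigCommands returns the shell commands the log above runs to point
// CRI-O at a pause image and the desired cgroup manager.
func crioReconfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo rm -rf /etc/cni/net.mk",
	}
}

func main() {
	for _, cmd := range crioReconfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}
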
	I1004 01:58:17.324280  169515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:58:17.333123  169515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:58:17.333181  169515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:58:17.346921  169515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:58:17.357411  169515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:58:17.466076  169515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:58:17.665370  169515 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:58:17.665446  169515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:58:17.671020  169515 start.go:537] Will wait 60s for crictl version
	I1004 01:58:17.671103  169515 ssh_runner.go:195] Run: which crictl
	I1004 01:58:17.675046  169515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:58:17.711171  169515 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:58:17.711255  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.764684  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.818887  169515 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:58:17.820580  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:17.823598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824003  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:17.824039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824180  169515 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 01:58:17.828529  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
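
The bash pipeline above makes the host.minikube.internal entry idempotent: it filters any existing line for that name out of /etc/hosts, appends the gateway IP, and copies the result back with sudo. A sketch of the same filter-and-append idea on an in-memory hosts file (hypothetical helper, assuming tab-separated entries as in the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func upsertHostsEntry(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(in, "host.minikube.internal", "192.168.61.1"))
}
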
	I1004 01:58:17.842201  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:58:17.842277  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:17.889167  169515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:58:17.889260  169515 ssh_runner.go:195] Run: which lz4
	I1004 01:58:17.893479  169515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 01:58:17.898162  169515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:58:17.898208  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:58:19.729377  169515 crio.go:444] Took 1.835934 seconds to copy over tarball
	I1004 01:58:19.729456  169515 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:58:22.593494  169515 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.864005818s)
	I1004 01:58:22.593526  169515 crio.go:451] Took 2.864115 seconds to extract the tarball
	I1004 01:58:22.593541  169515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:58:22.637806  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:22.688382  169515 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:58:22.688411  169515 cache_images.go:84] Images are preloaded, skipping loading
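
The preload sequence above asks crictl for its image list, concludes the kube-apiserver image is missing, copies the lz4 preload tarball to the VM, and unpacks it with tar -I lz4 before re-checking. A sketch of the decision step, parsing the output of crictl images --output json for a required tag; the JSON field names here are an assumption about crictl's output shape.

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImageList mirrors the assumed shape of `sudo crictl images --output json`.
type crictlImageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted tag shows up in the crictl listing,
// which is the check the log uses to decide if the preload must be copied.
func hasImage(raw []byte, wanted string) (bool, error) {
	var list crictlImageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == wanted {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.28.2")
	fmt.Println(ok, err) // true <nil> -> images already preloaded
}
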
	I1004 01:58:22.688492  169515 ssh_runner.go:195] Run: crio config
	I1004 01:58:22.763035  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:22.763056  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:22.763523  169515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:58:22.763558  169515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-239802 NodeName:default-k8s-diff-port-239802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:58:22.763710  169515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-239802"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:58:22.763781  169515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-239802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
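
The block above is the fully rendered kubeadm YAML and kubelet ExecStart line that minikube writes out before reconfiguring the cluster. Below is a much-reduced Go sketch of rendering such a config from an options struct with text/template; the struct fields and the template are illustrative, covering only a few of the options shown in the kubeadm.go:176 dump.

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a tiny, illustrative subset of the options shown in the log.
type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.61.105",
		BindPort:          8444,
		NodeName:          "default-k8s-diff-port-239802",
		KubernetesVersion: "v1.28.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render the reduced config to stdout; in the log the real file lands at
	// /var/tmp/minikube/kubeadm.yaml.new before being compared and copied.
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = tmpl.Execute(os.Stdout, opts)
}
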
	I1004 01:58:22.763836  169515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:58:22.772839  169515 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:58:22.772912  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:58:22.781165  169515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1004 01:58:22.799884  169515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:58:22.817806  169515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1004 01:58:22.836379  169515 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1004 01:58:22.840577  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:22.854009  169515 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802 for IP: 192.168.61.105
	I1004 01:58:22.854051  169515 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:58:22.854225  169515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:58:22.854280  169515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:58:22.854390  169515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/client.key
	I1004 01:58:22.854470  169515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key.c44c9625
	I1004 01:58:22.854525  169515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key
	I1004 01:58:22.854676  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:58:22.854716  169515 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:58:22.854731  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:58:22.854795  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:58:22.854841  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:58:22.854874  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:58:22.854936  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:22.855704  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:58:22.883055  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:58:22.909260  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:58:22.936140  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:58:22.963068  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:58:22.990358  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:58:23.019293  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:58:23.046021  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:58:23.072727  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:58:23.099530  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:58:23.125965  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:58:23.152909  169515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:58:23.171043  169515 ssh_runner.go:195] Run: openssl version
	I1004 01:58:23.177062  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:58:23.187693  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192607  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192695  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.198687  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:58:23.208870  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:58:23.220345  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225134  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225205  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.230830  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:58:23.241519  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:58:23.251661  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256671  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256740  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.263041  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:58:23.272914  169515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:58:23.277650  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:58:23.283889  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:58:23.289960  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:58:23.295853  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:58:23.302386  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:58:23.308626  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
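
Each openssl x509 -noout ... -checkend 86400 call above asks whether a control-plane certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509 (an illustrative helper, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path will be
// expired checkend from now, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, checkend time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(checkend).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(expiring, err)
}
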
	I1004 01:58:23.315173  169515 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:58:23.315270  169515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:58:23.315329  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:23.360078  169515 cri.go:89] found id: ""
	I1004 01:58:23.360160  169515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:58:23.370577  169515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:58:23.370607  169515 kubeadm.go:636] restartCluster start
	I1004 01:58:23.370670  169515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:58:23.380554  169515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.382064  169515 kubeconfig.go:92] found "default-k8s-diff-port-239802" server: "https://192.168.61.105:8444"
	I1004 01:58:23.384489  169515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:58:23.394552  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.394621  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.406027  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.406050  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.406088  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.416731  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.917459  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.917567  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.929055  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.417118  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.417196  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.429944  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.917530  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.917640  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.928908  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.417526  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.417598  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.429815  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.917482  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.917579  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.928966  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.417583  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.417703  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.429371  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.917165  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.917259  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.929210  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.417701  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.417803  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.429305  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.917024  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.928702  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.417024  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.417142  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.428772  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.917340  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.917439  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.929099  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.417234  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.417333  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.429431  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.916874  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.916967  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.928613  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.417157  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.417247  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.429364  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.917013  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.928682  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.417225  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.417328  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.429087  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.917131  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.917218  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.929475  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.416979  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.417061  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.431474  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.917018  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.917123  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.929083  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:33.394900  169515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
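
The long run of "Checking apiserver status ..." lines above is a fixed-interval poll for the kube-apiserver process via sudo pgrep -xnf, abandoned once the surrounding context hits its deadline and restartCluster decides the node needs reconfiguring. A sketch of that poll-until-deadline pattern with context and a ticker; the probe function is a stand-in for the pgrep-over-SSH call.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// pollUntil keeps calling probe every interval until it succeeds or ctx is
// done, mirroring the apiserver-status loop in the log.
func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if err := probe(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	// Stand-in for running `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH.
	probe := func() error { return errors.New("unable to get apiserver pid") }
	fmt.Println(pollUntil(ctx, 500*time.Millisecond, probe))
}
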
	I1004 01:58:33.394937  169515 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:58:33.394955  169515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:58:33.395025  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:33.439584  169515 cri.go:89] found id: ""
	I1004 01:58:33.439676  169515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:58:33.455188  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:58:33.464838  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:58:33.464909  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473594  169515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473622  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:33.606598  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.496399  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.698397  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.778632  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
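
The five commands above replay the relevant kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written config. A small Go sketch that reproduces those command strings in order; the binary and config paths are taken verbatim from the log, while the function itself is illustrative.

package main

import "fmt"

// kubeadmPhaseCommands lists the ordered `kubeadm init phase` calls run above.
func kubeadmPhaseCommands(version, cfg string) []string {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	binDir := "/var/lib/minikube/binaries/" + version
	cmds := make([]string, 0, len(phases))
	for _, p := range phases {
		cmds = append(cmds, fmt.Sprintf(
			`sudo env PATH="%s:$PATH" kubeadm init phase %s --config %s`, binDir, p, cfg))
	}
	return cmds
}

func main() {
	for _, c := range kubeadmPhaseCommands("v1.28.2", "/var/tmp/minikube/kubeadm.yaml") {
		fmt.Println(c)
	}
}
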
	I1004 01:58:34.858383  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:58:34.858475  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:34.871386  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.384197  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.884575  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.383599  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.883552  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.384513  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.409737  169515 api_server.go:72] duration metric: took 2.551352833s to wait for apiserver process to appear ...
	I1004 01:58:37.409768  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:58:37.409791  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410400  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.410464  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410871  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.911616  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.733688  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.733788  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.733802  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.789718  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.789758  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.911398  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.919484  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:41.919510  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.411543  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.417441  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.417474  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.910983  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.918972  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.918999  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:43.411752  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:43.418030  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 01:58:43.429647  169515 api_server.go:141] control plane version: v1.28.2
	I1004 01:58:43.429678  169515 api_server.go:131] duration metric: took 6.019900977s to wait for apiserver health ...
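	(Editor's note: the repeated healthz probes above are minikube's api_server.go readiness loop retrying until the apiserver stops answering 500. Purely as an illustration, and not minikube's actual code, a minimal Go sketch of that kind of poll follows; the URL is taken from the log, while the timeout, retry interval, and the TLS-skip shortcut are assumptions of the sketch.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; adjust host/port for your cluster.
	const healthz = "https://192.168.61.105:8444/healthz"

	// The apiserver serves a cluster-internal CA, so this sketch skips
	// verification; a real caller should load the cluster CA instead.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(healthz)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// A 500 whose body lists "[-]poststarthook/..." entries means a
			// post-start hook has not finished yet, as in the log above.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}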
	I1004 01:58:43.429690  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:43.429697  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:43.431972  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:58:43.433484  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:58:43.447694  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
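	(Editor's note: the "Configuring bridge CNI" step above copies a 457-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show the file's contents. The sketch below is only a generic bridge-plugin conflist written out from Go; the JSON fields, the 10.244.0.0/16 subnet, and the file permissions are assumptions, not minikube's actual file.)

package main

import (
	"log"
	"os"
)

// A generic CNI bridge conflist; minikube's real 1-k8s.conflist may differ.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}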
	I1004 01:58:43.471374  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:58:43.481660  169515 system_pods.go:59] 8 kube-system pods found
	I1004 01:58:43.481703  169515 system_pods.go:61] "coredns-5dd5756b68-ntmdn" [93a30dd9-0d38-4648-9291-703928437ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:58:43.481716  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [387a9b5c-12b7-4be8-ab2a-a05f15640f17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 01:58:43.481725  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [a9900212-1372-410f-b6d9-105f78dfde92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 01:58:43.481735  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [d9684911-65f2-4b81-800a-9d99b277b7e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:58:43.481747  169515 system_pods.go:61] "kube-proxy-v9qw4" [6db82ea2-130c-4f40-ae3e-2abe4fdb2860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 01:58:43.481757  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [98b82b29-64c3-4042-bf6b-040b05992648] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:58:43.481770  169515 system_pods.go:61] "metrics-server-57f55c9bc5-hxrqk" [94e85ebf-dba5-4975-8167-bc23dc74b5f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:58:43.481789  169515 system_pods.go:61] "storage-provisioner" [11d1866b-ef0b-4b12-a2d3-a38fe68f5184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 01:58:43.481801  169515 system_pods.go:74] duration metric: took 10.402243ms to wait for pod list to return data ...
	I1004 01:58:43.481815  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:58:43.485997  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:58:43.486041  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 01:58:43.486056  169515 node_conditions.go:105] duration metric: took 4.234155ms to run NodePressure ...
	I1004 01:58:43.486078  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:43.740784  169515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749933  169515 kubeadm.go:787] kubelet initialised
	I1004 01:58:43.749956  169515 kubeadm.go:788] duration metric: took 9.146841ms waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749964  169515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:58:43.762449  169515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:45.795545  169515 pod_ready.go:102] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:47.294570  169515 pod_ready.go:92] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:47.294593  169515 pod_ready.go:81] duration metric: took 3.532106169s waiting for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:47.294629  169515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:49.318426  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.320090  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.819783  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.819808  169515 pod_ready.go:81] duration metric: took 4.525169791s waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.819820  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825714  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.825738  169515 pod_ready.go:81] duration metric: took 5.910346ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825750  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345345  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.345375  169515 pod_ready.go:81] duration metric: took 519.614193ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345388  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351098  169515 pod_ready.go:92] pod "kube-proxy-v9qw4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.351115  169515 pod_ready.go:81] duration metric: took 5.721421ms waiting for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351123  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675957  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.675986  169515 pod_ready.go:81] duration metric: took 324.855954ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675999  169515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:54.985434  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:56.986014  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:59.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:01.984178  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:03.986718  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:06.486121  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:08.986286  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:10.988493  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:13.487313  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:15.986463  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:17.987092  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:20.484986  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:22.985012  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:25.486297  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:27.988254  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:30.486124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:32.486163  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:34.986124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:36.986217  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:39.485494  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:41.485638  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:43.987966  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:46.484556  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:48.984057  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:50.984900  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:53.483808  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:55.484765  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:57.485763  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:59.985726  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:02.484831  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:04.985989  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:07.485664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:09.485893  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:11.985932  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:13.986799  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:16.488334  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:18.985949  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:21.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:23.986108  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:26.486381  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:28.984912  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:31.484885  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:33.485511  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:35.485786  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:37.985061  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:40.486400  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:42.985255  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:45.485905  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:47.985646  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:49.988812  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:52.485077  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:54.485567  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:56.486128  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:58.486811  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:00.985292  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:02.985432  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:04.990218  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:07.485695  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:09.485758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:11.985237  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:13.988632  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:16.486921  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:18.986300  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:21.486008  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:23.990988  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:26.486730  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:28.984846  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:30.985403  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:32.985500  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:34.989615  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:37.485216  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:39.985745  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:42.485969  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:44.984000  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:46.984954  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:49.485168  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:51.986705  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:53.987005  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:56.484664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:58.485697  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:00.486876  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:02.986832  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:05.485817  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:07.486977  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:09.984945  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:11.985637  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:13.985859  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:16.484825  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:18.485020  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:20.485388  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:22.486622  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:24.985561  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:27.484794  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:29.986684  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:32.494495  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:34.984951  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:36.985082  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:38.987881  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:41.485453  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:43.486758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:45.983941  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:47.984452  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:50.486243  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:52.676831  169515 pod_ready.go:81] duration metric: took 4m0.000812817s waiting for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	E1004 02:02:52.676871  169515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 02:02:52.676911  169515 pod_ready.go:38] duration metric: took 4m8.926937921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:02:52.676950  169515 kubeadm.go:640] restartCluster took 4m29.306332407s
	W1004 02:02:52.677028  169515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
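	(Editor's note: the four-minute wait above times out because metrics-server-57f55c9bc5-hxrqk never reports the Ready condition, which forces the cluster reset that follows. As a rough illustration of the same kind of check, and not minikube's pod_ready.go, a minimal client-go sketch follows; the kubeconfig path and the poll/timeout values are assumptions.)

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; minikube manages its own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const ns, name = "kube-system", "metrics-server-57f55c9bc5-hxrqk"
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err == nil {
			// A pod is "Ready" once its PodReady condition is True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}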
	I1004 02:02:52.677066  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 02:03:06.687598  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.010492171s)
	I1004 02:03:06.687683  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:06.702277  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:03:06.711887  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:03:06.721545  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:03:06.721606  169515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:03:06.964165  169515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:03:17.591049  169515 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:03:17.591142  169515 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:03:17.591233  169515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:03:17.591398  169515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:03:17.591561  169515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:03:17.591679  169515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:03:17.593418  169515 out.go:204]   - Generating certificates and keys ...
	I1004 02:03:17.593514  169515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:03:17.593593  169515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:03:17.593716  169515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 02:03:17.593817  169515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 02:03:17.593913  169515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 02:03:17.593964  169515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 02:03:17.594015  169515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 02:03:17.594064  169515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 02:03:17.594137  169515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 02:03:17.594216  169515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 02:03:17.594254  169515 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 02:03:17.594318  169515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:03:17.594374  169515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:03:17.594446  169515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:03:17.594525  169515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:03:17.594596  169515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:03:17.594701  169515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:03:17.594785  169515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:03:17.596492  169515 out.go:204]   - Booting up control plane ...
	I1004 02:03:17.596593  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:03:17.596678  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:03:17.596767  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:03:17.596903  169515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:03:17.597026  169515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:03:17.597087  169515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:03:17.597271  169515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:03:17.597365  169515 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004292 seconds
	I1004 02:03:17.597507  169515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:03:17.597663  169515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:03:17.597752  169515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:03:17.598019  169515 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-239802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:03:17.598091  169515 kubeadm.go:322] [bootstrap-token] Using token: 23w16s.bx0je8b3n2xujqpx
	I1004 02:03:17.599777  169515 out.go:204]   - Configuring RBAC rules ...
	I1004 02:03:17.599892  169515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:03:17.600022  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:03:17.600211  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:03:17.600376  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:03:17.600517  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:03:17.600640  169515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:03:17.600774  169515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:03:17.600836  169515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:03:17.600895  169515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:03:17.600908  169515 kubeadm.go:322] 
	I1004 02:03:17.600957  169515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:03:17.600963  169515 kubeadm.go:322] 
	I1004 02:03:17.601026  169515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:03:17.601032  169515 kubeadm.go:322] 
	I1004 02:03:17.601053  169515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:03:17.601102  169515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:03:17.601157  169515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:03:17.601164  169515 kubeadm.go:322] 
	I1004 02:03:17.601213  169515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:03:17.601226  169515 kubeadm.go:322] 
	I1004 02:03:17.601282  169515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:03:17.601289  169515 kubeadm.go:322] 
	I1004 02:03:17.601369  169515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:03:17.601470  169515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:03:17.601584  169515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:03:17.601594  169515 kubeadm.go:322] 
	I1004 02:03:17.601698  169515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:03:17.601780  169515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:03:17.601791  169515 kubeadm.go:322] 
	I1004 02:03:17.601919  169515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602052  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:03:17.602084  169515 kubeadm.go:322] 	--control-plane 
	I1004 02:03:17.602094  169515 kubeadm.go:322] 
	I1004 02:03:17.602212  169515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:03:17.602221  169515 kubeadm.go:322] 
	I1004 02:03:17.602358  169515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602512  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:03:17.602532  169515 cni.go:84] Creating CNI manager for ""
	I1004 02:03:17.602543  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:03:17.605029  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:03:17.606395  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:03:17.633626  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 02:03:17.708983  169515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:03:17.709074  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.709079  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=default-k8s-diff-port-239802 minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.817989  169515 ops.go:34] apiserver oom_adj: -16
	I1004 02:03:18.073171  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.187308  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.820889  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.320388  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.820323  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.320333  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.821163  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.320330  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.821019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.321019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.821177  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.321168  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.820299  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.320582  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.820863  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.320469  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.820489  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.321120  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.820999  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.321119  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.820996  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.320295  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.821014  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.320832  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.820960  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.321064  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.472351  169515 kubeadm.go:1081] duration metric: took 12.76333985s to wait for elevateKubeSystemPrivileges.
	I1004 02:03:30.472398  169515 kubeadm.go:406] StartCluster complete in 5m7.157236676s
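	(Editor's note: the repeated "kubectl get sa default" runs above are minikube polling until the controller-manager has created the "default" ServiceAccount, which is what closes out elevateKubeSystemPrivileges. A minimal sketch of that polling loop, shelling out to the same command the log shows, follows; the binary and kubeconfig paths are copied from the log, while the timeout and interval are assumptions.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Same probe command as in the log above.
	cmd := []string{
		"sudo", "/var/lib/minikube/binaries/v1.28.2/kubectl",
		"get", "sa", "default",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds (exit 0) once the default ServiceAccount exists.
		if err := exec.Command(cmd[0], cmd[1:]...).Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}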
	I1004 02:03:30.472421  169515 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.472516  169515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:03:30.474474  169515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.474744  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:03:30.474777  169515 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:03:30.474868  169515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474889  169515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474894  169515 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474903  169515 addons.go:240] addon storage-provisioner should already be in state true
	I1004 02:03:30.474906  169515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474929  169515 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474938  169515 addons.go:240] addon metrics-server should already be in state true
	I1004 02:03:30.474973  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474985  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474911  169515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-239802"
	I1004 02:03:30.474998  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475437  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475468  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475439  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475657  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.493623  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I1004 02:03:30.493662  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I1004 02:03:30.493781  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1004 02:03:30.494163  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494166  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494444  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494788  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494790  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494812  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.494815  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495193  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.495213  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.495555  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495810  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.495842  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.496520  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.496559  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.499305  169515 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.499322  169515 addons.go:240] addon default-storageclass should already be in state true
	I1004 02:03:30.499345  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.499914  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.499942  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.514137  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I1004 02:03:30.514752  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.515464  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.515494  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.515576  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1004 02:03:30.515848  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.515990  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.516030  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.516461  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.516481  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.516840  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.517034  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.518156  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.518191  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I1004 02:03:30.521584  169515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 02:03:30.518793  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.518847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.522961  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:03:30.522981  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:03:30.523002  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.524589  169515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:03:30.523376  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.524627  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.525081  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.525873  169515 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.525888  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:03:30.525904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.526430  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.526461  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.526677  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.530913  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.531170  169515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-239802" context rescaled to 1 replicas
	I1004 02:03:30.531206  169515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:03:30.532986  169515 out.go:177] * Verifying Kubernetes components...
	I1004 02:03:30.531340  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.531757  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.533318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.533937  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.535094  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:30.535197  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.535227  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.535231  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535394  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535440  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.535914  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.535943  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.536116  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.549570  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I1004 02:03:30.550039  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.550714  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.550744  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.551157  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.551347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.553113  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.553403  169515 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.553418  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:03:30.553433  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.555904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556293  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.556318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.556748  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.556908  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.557059  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.745640  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.772975  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:03:30.772997  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 02:03:30.828675  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.862436  169515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.862505  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:03:30.867582  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:03:30.867606  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:03:30.869762  169515 node_ready.go:49] node "default-k8s-diff-port-239802" has status "Ready":"True"
	I1004 02:03:30.869782  169515 node_ready.go:38] duration metric: took 7.313127ms waiting for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.869791  169515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:30.878259  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:30.953707  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:30.953739  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:03:31.080848  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:31.923980  169515 pod_ready.go:97] error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924020  169515 pod_ready.go:81] duration metric: took 1.045735768s waiting for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	E1004 02:03:31.924034  169515 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924041  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.089720  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.344027143s)
	I1004 02:03:33.089798  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.227266643s)
	I1004 02:03:33.089820  169515 start.go:923] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1004 02:03:33.089826  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089749  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261039922s)
	I1004 02:03:33.089847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.089856  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089872  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090197  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090217  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090228  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090226  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090240  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090292  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090310  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090322  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090333  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090332  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090486  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090501  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090993  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.091015  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.120294  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.120321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.120639  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.120660  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379169  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.298272317s)
	I1004 02:03:33.379231  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379247  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379568  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379585  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379595  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379608  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379884  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.379928  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379952  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379965  169515 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-239802"
	I1004 02:03:33.382638  169515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 02:03:33.384185  169515 addons.go:502] enable addons completed in 2.909411548s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 02:03:33.970600  169515 pod_ready.go:92] pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.970634  169515 pod_ready.go:81] duration metric: took 2.046583312s waiting for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.970649  169515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976833  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.976858  169515 pod_ready.go:81] duration metric: took 6.200437ms waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976870  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.983984  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.984006  169515 pod_ready.go:81] duration metric: took 7.126822ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.984016  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269435  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.269462  169515 pod_ready.go:81] duration metric: took 285.437635ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269476  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667111  169515 pod_ready.go:92] pod "kube-proxy-b5ltp" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.667138  169515 pod_ready.go:81] duration metric: took 397.655055ms waiting for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667147  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068656  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:35.068692  169515 pod_ready.go:81] duration metric: took 401.53728ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068706  169515 pod_ready.go:38] duration metric: took 4.198904278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:35.068731  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:03:35.068800  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:03:35.085104  169515 api_server.go:72] duration metric: took 4.553859804s to wait for apiserver process to appear ...
	I1004 02:03:35.085129  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:03:35.085148  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 02:03:35.093144  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 02:03:35.094563  169515 api_server.go:141] control plane version: v1.28.2
	I1004 02:03:35.094583  169515 api_server.go:131] duration metric: took 9.447369ms to wait for apiserver health ...
	I1004 02:03:35.094591  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:03:35.271828  169515 system_pods.go:59] 8 kube-system pods found
	I1004 02:03:35.271855  169515 system_pods.go:61] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.271862  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.271867  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.271871  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.271875  169515 system_pods.go:61] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.271879  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.271886  169515 system_pods.go:61] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.271891  169515 system_pods.go:61] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.271899  169515 system_pods.go:74] duration metric: took 177.302484ms to wait for pod list to return data ...
	I1004 02:03:35.271906  169515 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:03:35.466915  169515 default_sa.go:45] found service account: "default"
	I1004 02:03:35.466956  169515 default_sa.go:55] duration metric: took 195.042376ms for default service account to be created ...
	I1004 02:03:35.466968  169515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:03:35.669331  169515 system_pods.go:86] 8 kube-system pods found
	I1004 02:03:35.669358  169515 system_pods.go:89] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.669363  169515 system_pods.go:89] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.669368  169515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.669372  169515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.669376  169515 system_pods.go:89] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.669380  169515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.669386  169515 system_pods.go:89] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.669391  169515 system_pods.go:89] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.669397  169515 system_pods.go:126] duration metric: took 202.42259ms to wait for k8s-apps to be running ...
	I1004 02:03:35.669404  169515 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:03:35.669446  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:35.685440  169515 system_svc.go:56] duration metric: took 16.022733ms WaitForService to wait for kubelet.
	I1004 02:03:35.685475  169515 kubeadm.go:581] duration metric: took 5.154237901s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 02:03:35.685502  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:03:35.867523  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 02:03:35.867616  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 02:03:35.867645  169515 node_conditions.go:105] duration metric: took 182.13715ms to run NodePressure ...
	I1004 02:03:35.867672  169515 start.go:228] waiting for startup goroutines ...
	I1004 02:03:35.867711  169515 start.go:233] waiting for cluster config update ...
	I1004 02:03:35.867729  169515 start.go:242] writing updated cluster config ...
	I1004 02:03:35.868000  169515 ssh_runner.go:195] Run: rm -f paused
	I1004 02:03:35.921562  169515 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 02:03:35.924514  169515 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-239802" cluster and "default" namespace by default
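	
	A minimal Go sketch of the apiserver health check performed above: it fetches https://192.168.61.105:8444/healthz and treats a 200 response with body "ok" as healthy, matching the log entries at 02:03:35. The host, port, and expected body are taken from the log; skipping TLS verification is an assumption made only to keep the sketch self-contained and is not how the test harness itself authenticates.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Probe the apiserver on the non-default port 8444 used by the
		// default-k8s-diff-port profile (values copied from the log above).
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // assumption: skip cert checks for the sketch
		}
		resp, err := client.Get("https://192.168.61.105:8444/healthz")
		if err != nil {
			fmt.Println("healthz request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 200 status with body "ok" corresponds to the healthy result logged above.
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, string(body))
	}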
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:21 UTC, ends at Wed 2023-10-04 02:06:39 UTC. --
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.158304032Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0a19b90f-a60c-41e0-b613-d02001fb3c0a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.160159595Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b8042625-4c9b-4883-9038-0433c16aa381 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.160660349Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385199160643720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b8042625-4c9b-4883-9038-0433c16aa381 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.161314233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e8edae6f-bbf5-48ee-926a-cfc9b9fa3d04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.161392030Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e8edae6f-bbf5-48ee-926a-cfc9b9fa3d04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.161653286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e8edae6f-bbf5-48ee-926a-cfc9b9fa3d04 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.165783354Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=49a9b682-d1a4-435b-98df-532782735403 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.166014887Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:8e4449eb369e62e5b191ea3d0fc9765f70fbc0b46e42597f92f327cac5425fd5,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-cl45r,Uid:93297548-dde0-4cd3-b47f-a2a867cca7c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384594897755604,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-cl45r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93297548-dde0-4cd3-b47f-a2a867cca7c4,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:56:34.551867786Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:71715868-9727-4d70-b5b4-5f0199e057
9a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384594048640288,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-04T01:56:33.701722827Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-nbf4s,Uid:8965b384-aa80-4e12-8323-4129cc7b53c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384593617944260,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:56:33.270659191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&PodSandboxMetadata{Name:kube-proxy-8lcf5,Uid:50235cdc-deb8-47a6-974
a-943636afd805,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384591857699018,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235cdc-deb8-47a6-974a-943636afd805,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:56:31.51185778Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-107182,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384564947210719,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-10-04T01:56:04.532599047Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-107182,Uid:dec8256181ff519fcd0206fc263f213f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384564937839195,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dec8256181ff519fcd0206fc263f213f,kubernetes.io/config.seen: 2023-10-04T01:56:04.539665492Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:62d1f4d364c97bd2977cc8b1fa4de
634e36b0780ea50584a667f814a90f209d3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-107182,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384564914743624,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-10-04T01:56:04.534269341Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-107182,Uid:07ab9b7d9d902c2a45d50d3a2fa34072,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384564906360367,Labels:map[string]string{component: kube-apiserver,io.
kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 07ab9b7d9d902c2a45d50d3a2fa34072,kubernetes.io/config.seen: 2023-10-04T01:56:04.5262618Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=49a9b682-d1a4-435b-98df-532782735403 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.167324374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9310fba3-a55b-4551-bf9a-a34599161670 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.167368758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9310fba3-a55b-4551-bf9a-a34599161670 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.167595618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9310fba3-a55b-4551-bf9a-a34599161670 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.208198309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cbeba3f1-52ca-44f5-8347-353b42755cd3 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.208282872Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cbeba3f1-52ca-44f5-8347-353b42755cd3 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.210099256Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=68f1bd47-d6cd-43a0-9a7a-7ae1643709a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.210605470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385199210487040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=68f1bd47-d6cd-43a0-9a7a-7ae1643709a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.211206454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0e7fc45-3125-4361-8692-2a203e2712f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.211250264Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0e7fc45-3125-4361-8692-2a203e2712f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.211402401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0e7fc45-3125-4361-8692-2a203e2712f9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.250609541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=5221e1d8-a0a7-445c-ae76-b6da2e882ace name=/runtime.v1.RuntimeService/Version
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.250680375Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=5221e1d8-a0a7-445c-ae76-b6da2e882ace name=/runtime.v1.RuntimeService/Version
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.251998222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4f67cbf6-06b8-47f7-b26d-e5b9530005bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.252453459Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385199252433741,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4f67cbf6-06b8-47f7-b26d-e5b9530005bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.253131373Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=729e31b9-7e8c-4bb2-8972-918ddb54a875 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.253225084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=729e31b9-7e8c-4bb2-8972-918ddb54a875 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:06:39 old-k8s-version-107182 crio[705]: time="2023-10-04 02:06:39.253403000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=729e31b9-7e8c-4bb2-8972-918ddb54a875 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9d330530b6df7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   c0a0bd64bda5f       storage-provisioner
	13ec581cb5718       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   a6d1ca9ae37e8       coredns-5644d7b6d9-nbf4s
	cf3d049e86396       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   0cf3f87525528       kube-proxy-8lcf5
	a7a399e035861       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   29aea50160d27       etcd-old-k8s-version-107182
	438e23cb6e38e       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   62d1f4d364c97       kube-scheduler-old-k8s-version-107182
	1b2995e232648       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   1fa3b4f49c652       kube-controller-manager-old-k8s-version-107182
	65c2228b5316d       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            0                   b3ac3bc12deb7       kube-apiserver-old-k8s-version-107182
	
	* 
	* ==> coredns [13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa] <==
	* .:53
	2023-10-04T01:56:34.498Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-10-04T01:56:34.498Z [INFO] CoreDNS-1.6.2
	2023-10-04T01:56:34.498Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-04T01:56:34.518Z [INFO] 127.0.0.1:50899 - 24978 "HINFO IN 2261754534219632708.6796649709746906629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019711849s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-107182
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-107182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=old-k8s-version-107182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:06:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:06:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:06:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:06:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    old-k8s-version-107182
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 7806f00958934b92827665aaf231c2c8
	 System UUID:                7806f009-5893-4b92-8276-65aaf231c2c8
	 Boot ID:                    ba7906a7-94a1-4660-9620-ac43e770ae22
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-nbf4s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-107182                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                kube-apiserver-old-k8s-version-107182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                kube-controller-manager-old-k8s-version-107182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                kube-proxy-8lcf5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-107182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                metrics-server-74d5856cc6-cl45r                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-107182  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.080190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.534569] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166313] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.560588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.218454] systemd-fstab-generator[630]: Ignoring "noauto" for root device
	[  +0.130081] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.181684] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.109279] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.274661] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[ +19.754098] systemd-fstab-generator[1011]: Ignoring "noauto" for root device
	[  +0.457158] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 4 01:51] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.094340] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 01:56] systemd-fstab-generator[3174]: Ignoring "noauto" for root device
	[  +0.986926] kauditd_printk_skb: 8 callbacks suppressed
	[ +40.176446] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818] <==
	* 2023-10-04 01:56:06.944804 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-04 01:56:06.945877 I | etcdserver: ff4c26660998c2c8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-04 01:56:06.946216 I | etcdserver/membership: added member ff4c26660998c2c8 [https://192.168.72.182:2380] to cluster 1c15affd5c0f3dba
	2023-10-04 01:56:06.948060 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-04 01:56:06.948416 I | embed: listening for metrics on http://192.168.72.182:2381
	2023-10-04 01:56:06.948735 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-04 01:56:07.932454 I | raft: ff4c26660998c2c8 is starting a new election at term 1
	2023-10-04 01:56:07.932670 I | raft: ff4c26660998c2c8 became candidate at term 2
	2023-10-04 01:56:07.932712 I | raft: ff4c26660998c2c8 received MsgVoteResp from ff4c26660998c2c8 at term 2
	2023-10-04 01:56:07.932746 I | raft: ff4c26660998c2c8 became leader at term 2
	2023-10-04 01:56:07.932771 I | raft: raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2
	2023-10-04 01:56:07.933148 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-04 01:56:07.934860 I | etcdserver: published {Name:old-k8s-version-107182 ClientURLs:[https://192.168.72.182:2379]} to cluster 1c15affd5c0f3dba
	2023-10-04 01:56:07.934915 I | embed: ready to serve client requests
	2023-10-04 01:56:07.936437 I | embed: serving client requests on 192.168.72.182:2379
	2023-10-04 01:56:07.940760 I | embed: ready to serve client requests
	2023-10-04 01:56:07.942150 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-04 01:56:07.952916 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-04 01:56:07.953013 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-04 01:56:33.395838 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (102.845874ms) to execute
	2023-10-04 01:56:33.417678 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-nbf4s\" " with result "range_response_count:1 size:1367" took too long (113.021159ms) to execute
	2023-10-04 01:58:24.847894 W | etcdserver: request "header:<ID:14035621038172576277 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.182\" mod_revision:532 > success:<request_put:<key:\"/registry/masterleases/192.168.72.182\" value_size:69 lease:4812249001317800467 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.182\" > >>" with result "size:16" took too long (347.589649ms) to execute
	2023-10-04 01:58:25.233408 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (225.416279ms) to execute
	2023-10-04 02:06:07.979604 I | mvcc: store.index: compact 663
	2023-10-04 02:06:07.982023 I | mvcc: finished scheduled compaction at 663 (took 1.927459ms)
	
	* 
	* ==> kernel <==
	*  02:06:39 up 16 min,  0 users,  load average: 0.20, 0.25, 0.26
	Linux old-k8s-version-107182 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b] <==
	* I1004 01:59:35.478273       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 01:59:35.478839       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 01:59:35.479055       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 01:59:35.479103       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:01:12.196896       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:01:12.197065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:01:12.197134       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:01:12.197168       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:02:12.197629       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:02:12.197940       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:02:12.198006       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:02:12.198028       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:04:12.198871       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:04:12.198991       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:04:12.199064       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:04:12.199075       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:06:12.199402       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:06:12.199649       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:06:12.199743       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:06:12.199755       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8] <==
	* W1004 02:00:16.584582       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:00:34.182198       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:00:48.586856       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:01:04.436094       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:01:20.588956       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:01:34.688611       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:01:52.591139       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:02:04.941047       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:02:24.593752       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:02:35.192934       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:02:56.596689       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:03:05.445098       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:03:28.598992       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:03:35.698225       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:04:00.601284       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:04:05.950113       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:04:32.604133       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:04:36.207094       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:05:04.606868       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:05:06.459388       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:05:36.608983       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:05:36.711879       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1004 02:06:06.964139       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:06:08.611213       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:06:37.216159       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35] <==
	* W1004 01:56:33.542599       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1004 01:56:33.576578       1 node.go:135] Successfully retrieved node IP: 192.168.72.182
	I1004 01:56:33.576646       1 server_others.go:149] Using iptables Proxier.
	I1004 01:56:33.600565       1 server.go:529] Version: v1.16.0
	I1004 01:56:33.642977       1 config.go:313] Starting service config controller
	I1004 01:56:33.643180       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1004 01:56:33.663985       1 config.go:131] Starting endpoints config controller
	I1004 01:56:33.664063       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1004 01:56:33.771141       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1004 01:56:33.772460       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12] <==
	* I1004 01:56:11.204389       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1004 01:56:11.204857       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1004 01:56:11.252103       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:56:11.270727       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:56:11.271187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 01:56:11.271481       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:56:11.271671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:56:11.271878       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 01:56:11.273455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:11.273735       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:56:11.273837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:56:11.275703       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:11.283631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:56:12.264742       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:56:12.277302       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 01:56:12.277739       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:56:12.282034       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:56:12.282133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:56:12.283090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 01:56:12.287979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:12.288075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:56:12.288158       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:12.288210       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:56:12.289839       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:56:31.359736       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:21 UTC, ends at Wed 2023-10-04 02:06:39 UTC. --
	Oct 04 02:02:17 old-k8s-version-107182 kubelet[3180]: E1004 02:02:17.277949    3180 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:02:17 old-k8s-version-107182 kubelet[3180]: E1004 02:02:17.278052    3180 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:02:17 old-k8s-version-107182 kubelet[3180]: E1004 02:02:17.278110    3180 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:02:17 old-k8s-version-107182 kubelet[3180]: E1004 02:02:17.278150    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 04 02:02:29 old-k8s-version-107182 kubelet[3180]: E1004 02:02:29.202920    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:02:43 old-k8s-version-107182 kubelet[3180]: E1004 02:02:43.202564    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:02:57 old-k8s-version-107182 kubelet[3180]: E1004 02:02:57.203328    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:03:08 old-k8s-version-107182 kubelet[3180]: E1004 02:03:08.202707    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:03:20 old-k8s-version-107182 kubelet[3180]: E1004 02:03:20.202899    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:03:33 old-k8s-version-107182 kubelet[3180]: E1004 02:03:33.203028    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:03:45 old-k8s-version-107182 kubelet[3180]: E1004 02:03:45.203713    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:04:00 old-k8s-version-107182 kubelet[3180]: E1004 02:04:00.204420    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:04:14 old-k8s-version-107182 kubelet[3180]: E1004 02:04:14.202819    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:04:25 old-k8s-version-107182 kubelet[3180]: E1004 02:04:25.202156    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:04:37 old-k8s-version-107182 kubelet[3180]: E1004 02:04:37.202173    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:04:51 old-k8s-version-107182 kubelet[3180]: E1004 02:04:51.202369    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:03 old-k8s-version-107182 kubelet[3180]: E1004 02:05:03.202608    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:16 old-k8s-version-107182 kubelet[3180]: E1004 02:05:16.202612    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:28 old-k8s-version-107182 kubelet[3180]: E1004 02:05:28.202273    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:43 old-k8s-version-107182 kubelet[3180]: E1004 02:05:43.202988    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:57 old-k8s-version-107182 kubelet[3180]: E1004 02:05:57.202173    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:04 old-k8s-version-107182 kubelet[3180]: E1004 02:06:04.284308    3180 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 04 02:06:11 old-k8s-version-107182 kubelet[3180]: E1004 02:06:11.202626    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:24 old-k8s-version-107182 kubelet[3180]: E1004 02:06:24.202346    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:38 old-k8s-version-107182 kubelet[3180]: E1004 02:06:38.202151    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2] <==
	* I1004 01:56:34.832446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:56:34.846127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:56:34.846215       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:56:34.857939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:56:34.858123       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795!
	I1004 01:56:34.861406       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1aefe08-19f6-4f35-bb5e-129713d0fae4", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795 became leader
	I1004 01:56:34.958464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-107182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-cl45r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r: exit status 1 (68.131529ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-cl45r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.61s)
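Editor's note: a minimal sketch of how the metrics-server back-off reported in the kubelet log above could be confirmed by hand, assuming the old-k8s-version-107182 profile were still running (in this run the pod had already been removed, hence the NotFound above). The k8s-app=metrics-server label selector is an assumption about the addon's labels, not something shown in this log:

	# Check whether the metrics-server pod is present and why it is not running (label selector assumed).
	kubectl --context old-k8s-version-107182 -n kube-system get pods -l k8s-app=metrics-server -o wide
	# Inspect its events; fake.domain is the intentionally unreachable registry seen in the kubelet log above.
	kubectl --context old-k8s-version-107182 -n kube-system describe pod -l k8s-app=metrics-server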

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:12:36.483835692 +0000 UTC m=+5348.854866730
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
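Editor's note: for reference, a minimal sketch of the condition the harness is polling for here, checked by hand against the same profile. The context name, namespace, and label selector are taken from the lines above; the use of kubectl wait with the Ready condition is an assumed rough equivalent, not the harness's exact mechanism:

	# List the dashboard pods the test waits for.
	kubectl --context default-k8s-diff-port-239802 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Roughly equivalent blocking wait (assumed form; the harness polls via its own helpers).
	kubectl --context default-k8s-diff-port-239802 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s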
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-239802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-239802 logs -n 25: (1.429633959s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo docker                        | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo cat                           | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:12 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC | 04 Oct 23 02:12 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo                               | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC | 04 Oct 23 02:12 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo find                          | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC | 04 Oct 23 02:12 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-171116 sudo crio                          | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC | 04 Oct 23 02:12 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-171116                                    | kindnet-171116            | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC | 04 Oct 23 02:12 UTC |
	| start   | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:12 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
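The command table above is minikube's audit trail for this agent. Assuming the default layout under the MINIKUBE_HOME printed further down in this log, the same records should be readable straight from the audit log file (the logs/audit.json location is an assumption for this minikube version, not something shown in the report):

	# hypothetical manual check on the CI agent
	MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	tail "$MINIKUBE_HOME/logs/audit.json"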
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 02:12:02
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:12:02.589420  177044 out.go:296] Setting OutFile to fd 1 ...
	I1004 02:12:02.589684  177044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:12:02.589692  177044 out.go:309] Setting ErrFile to fd 2...
	I1004 02:12:02.589697  177044 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:12:02.589905  177044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 02:12:02.590517  177044 out.go:303] Setting JSON to false
	I1004 02:12:02.591615  177044 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10474,"bootTime":1696375049,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:12:02.591679  177044 start.go:138] virtualization: kvm guest
	I1004 02:12:02.594005  177044 out.go:177] * [enable-default-cni-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:12:02.595662  177044 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 02:12:02.597078  177044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:12:02.595702  177044 notify.go:220] Checking for updates...
	I1004 02:12:02.599921  177044 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:12:02.601397  177044 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:12:02.602764  177044 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:12:02.603958  177044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:12:02.605780  177044 config.go:182] Loaded profile config "calico-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:12:02.605970  177044 config.go:182] Loaded profile config "custom-flannel-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:12:02.606108  177044 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:12:02.606252  177044 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 02:12:02.646049  177044 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:12:02.647444  177044 start.go:298] selected driver: kvm2
	I1004 02:12:02.647465  177044 start.go:902] validating driver "kvm2" against <nil>
	I1004 02:12:02.647477  177044 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:12:02.648186  177044 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:12:02.648281  177044 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:12:02.665621  177044 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 02:12:02.665696  177044 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	E1004 02:12:02.665896  177044 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1004 02:12:02.665920  177044 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:12:02.665958  177044 cni.go:84] Creating CNI manager for "bridge"
	I1004 02:12:02.665967  177044 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:12:02.665979  177044 start_flags.go:321] config:
	{Name:enable-default-cni-171116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:12:02.666100  177044 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:12:02.668273  177044 out.go:177] * Starting control plane node enable-default-cni-171116 in cluster enable-default-cni-171116
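The E-level line above shows the deprecated --enable-default-cni flag being rewritten to --cni=bridge before the cluster config is generated. An equivalent invocation with the explicit flag (same options as the start entry in the command table at the top of this log, only the deprecated flag swapped out) would be roughly:

	minikube start -p enable-default-cni-171116 \
	  --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
	  --cni=bridge --driver=kvm2 --container-runtime=crio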
	I1004 02:11:58.383154  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:11:58.383701  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | unable to find current IP address of domain custom-flannel-171116 in network mk-custom-flannel-171116
	I1004 02:11:58.383731  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | I1004 02:11:58.383648  175645 retry.go:31] will retry after 2.551649855s: waiting for machine to come up
	I1004 02:12:00.937961  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:00.937999  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | unable to find current IP address of domain custom-flannel-171116 in network mk-custom-flannel-171116
	I1004 02:12:00.938018  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | I1004 02:12:00.937481  175645 retry.go:31] will retry after 3.566778249s: waiting for machine to come up
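The retry lines above are minikube waiting for the custom-flannel-171116 VM to pick up a DHCP lease on its libvirt network. The lease table can be inspected by hand with standard libvirt tooling (a manual check on the host, not something the test runs):

	# hypothetical manual check on the libvirt host
	virsh net-dhcp-leases mk-custom-flannel-171116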
	I1004 02:12:06.241332  174058 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002759 seconds
	I1004 02:12:06.241427  174058 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:12:06.267805  174058 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:12:06.794635  174058 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:12:06.794867  174058 kubeadm.go:322] [mark-control-plane] Marking the node calico-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:12:07.314807  174058 kubeadm.go:322] [bootstrap-token] Using token: q2x5g1.tie3v54tnx4kmc86
	I1004 02:12:02.669800  177044 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:12:02.669894  177044 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 02:12:02.669911  177044 cache.go:57] Caching tarball of preloaded images
	I1004 02:12:02.669989  177044 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:12:02.670004  177044 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 02:12:02.670106  177044 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/enable-default-cni-171116/config.json ...
	I1004 02:12:02.670126  177044 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/enable-default-cni-171116/config.json: {Name:mk12e2b11b914beace62dd95db0530dd3fe9f887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:02.670303  177044 start.go:365] acquiring machines lock for enable-default-cni-171116: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
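The preload lines above confirm the CRI-O image tarball is already in the local cache, so no download is needed. A quick sanity check of the cached artifact (path taken verbatim from the log) might look like:

	ls -lh /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4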
	I1004 02:12:04.507263  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:04.507703  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | unable to find current IP address of domain custom-flannel-171116 in network mk-custom-flannel-171116
	I1004 02:12:04.507722  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | I1004 02:12:04.507676  175645 retry.go:31] will retry after 5.290624653s: waiting for machine to come up
	I1004 02:12:07.317742  174058 out.go:204]   - Configuring RBAC rules ...
	I1004 02:12:07.317955  174058 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:12:07.327528  174058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:12:07.343636  174058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:12:07.348263  174058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:12:07.352728  174058 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:12:07.357242  174058 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:12:07.373833  174058 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:12:07.655539  174058 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:12:07.737316  174058 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:12:07.738378  174058 kubeadm.go:322] 
	I1004 02:12:07.738492  174058 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:12:07.738516  174058 kubeadm.go:322] 
	I1004 02:12:07.738619  174058 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:12:07.738639  174058 kubeadm.go:322] 
	I1004 02:12:07.738677  174058 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:12:07.738762  174058 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:12:07.738837  174058 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:12:07.738846  174058 kubeadm.go:322] 
	I1004 02:12:07.738912  174058 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:12:07.738921  174058 kubeadm.go:322] 
	I1004 02:12:07.738988  174058 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:12:07.738995  174058 kubeadm.go:322] 
	I1004 02:12:07.739066  174058 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:12:07.739147  174058 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:12:07.739210  174058 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:12:07.739217  174058 kubeadm.go:322] 
	I1004 02:12:07.739294  174058 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:12:07.739393  174058 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:12:07.739399  174058 kubeadm.go:322] 
	I1004 02:12:07.739492  174058 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q2x5g1.tie3v54tnx4kmc86 \
	I1004 02:12:07.739582  174058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:12:07.739603  174058 kubeadm.go:322] 	--control-plane 
	I1004 02:12:07.739609  174058 kubeadm.go:322] 
	I1004 02:12:07.739718  174058 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:12:07.739734  174058 kubeadm.go:322] 
	I1004 02:12:07.739838  174058 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q2x5g1.tie3v54tnx4kmc86 \
	I1004 02:12:07.739981  174058 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:12:07.741475  174058 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
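kubeadm init for the calico-171116 profile finished above and printed the usual join command with a bootstrap token and CA hash. A manual confirmation of that token from inside the node (not part of the test flow; the kubeadm path mirrors the kubectl path shown a few lines below and is an assumption) could be:

	minikube ssh -p calico-171116 sudo /var/lib/minikube/binaries/v1.28.2/kubeadm token list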
	I1004 02:12:07.741504  174058 cni.go:84] Creating CNI manager for "calico"
	I1004 02:12:07.743394  174058 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1004 02:12:07.744999  174058 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 02:12:07.745025  174058 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (244810 bytes)
	I1004 02:12:07.783852  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
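Once the Calico manifest has been applied, the test waits for the cluster to settle. One way to watch the Calico daemonset come up by hand would be the following (the k8s-app=calico-node label is the upstream default and an assumption here, not something this log shows):

	kubectl --context calico-171116 -n kube-system get pods -l k8s-app=calico-node -w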
	I1004 02:12:11.627018  177044 start.go:369] acquired machines lock for "enable-default-cni-171116" in 8.956659403s
	I1004 02:12:11.627088  177044 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.2 ClusterName:enable-default-cni-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:12:11.627236  177044 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:12:11.629744  177044 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 02:12:11.629976  177044 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:11.630034  177044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:11.647740  177044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1004 02:12:11.648145  177044 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:11.648718  177044 main.go:141] libmachine: Using API Version  1
	I1004 02:12:11.648743  177044 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:11.649106  177044 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:11.649296  177044 main.go:141] libmachine: (enable-default-cni-171116) Calling .GetMachineName
	I1004 02:12:11.649465  177044 main.go:141] libmachine: (enable-default-cni-171116) Calling .DriverName
	I1004 02:12:11.649661  177044 start.go:159] libmachine.API.Create for "enable-default-cni-171116" (driver="kvm2")
	I1004 02:12:11.649697  177044 client.go:168] LocalClient.Create starting
	I1004 02:12:11.649734  177044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 02:12:11.649773  177044 main.go:141] libmachine: Decoding PEM data...
	I1004 02:12:11.649796  177044 main.go:141] libmachine: Parsing certificate...
	I1004 02:12:11.649887  177044 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 02:12:11.649917  177044 main.go:141] libmachine: Decoding PEM data...
	I1004 02:12:11.649935  177044 main.go:141] libmachine: Parsing certificate...
	I1004 02:12:11.649962  177044 main.go:141] libmachine: Running pre-create checks...
	I1004 02:12:11.649976  177044 main.go:141] libmachine: (enable-default-cni-171116) Calling .PreCreateCheck
	I1004 02:12:11.650306  177044 main.go:141] libmachine: (enable-default-cni-171116) Calling .GetConfigRaw
	I1004 02:12:11.650772  177044 main.go:141] libmachine: Creating machine...
	I1004 02:12:11.650791  177044 main.go:141] libmachine: (enable-default-cni-171116) Calling .Create
	I1004 02:12:11.650923  177044 main.go:141] libmachine: (enable-default-cni-171116) Creating KVM machine...
	I1004 02:12:11.652201  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | found existing default KVM network
	I1004 02:12:11.653701  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.653568  177121 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:bb:21} reservation:<nil>}
	I1004 02:12:11.655114  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.655028  177121 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:eb:3d:47} reservation:<nil>}
	I1004 02:12:11.656470  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.656366  177121 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:b0:63} reservation:<nil>}
	I1004 02:12:11.657703  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.657613  177121 network.go:214] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:50:44:df} reservation:<nil>}
	I1004 02:12:11.660252  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.660159  177121 network.go:209] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029350}
	I1004 02:12:11.665780  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | trying to create private KVM network mk-enable-default-cni-171116 192.168.83.0/24...
	I1004 02:12:11.751505  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | private KVM network mk-enable-default-cni-171116 192.168.83.0/24 created
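In the lines above minikube walks the existing libvirt networks, skips the /24s already in use, and creates mk-enable-default-cni-171116 on the free 192.168.83.0/24 subnet. The resulting network can be inspected with standard virsh commands (manual checks, not run by the test):

	virsh net-list --all
	virsh net-dumpxml mk-enable-default-cni-171116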
	I1004 02:12:11.751546  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:11.751441  177121 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:12:11.751564  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116 ...
	I1004 02:12:11.751603  177044 main.go:141] libmachine: (enable-default-cni-171116) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 02:12:11.751648  177044 main.go:141] libmachine: (enable-default-cni-171116) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 02:12:12.001689  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:12.001523  177121 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116/id_rsa...
	I1004 02:12:12.386584  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:12.386262  177121 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116/enable-default-cni-171116.rawdisk...
	I1004 02:12:12.386677  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Writing magic tar header
	I1004 02:12:12.386721  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Writing SSH key tar header
	I1004 02:12:12.386926  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:12.386856  177121 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116 ...
	I1004 02:12:12.387012  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116
	I1004 02:12:12.387041  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116 (perms=drwx------)
	I1004 02:12:12.387056  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 02:12:12.387124  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:12:12.387159  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:12:12.387175  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 02:12:12.387190  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 02:12:12.387211  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:12:12.387225  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:12:12.387239  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Checking permissions on dir: /home
	I1004 02:12:12.387252  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 02:12:12.387267  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | Skipping /home - not owner
	I1004 02:12:12.387280  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:12:12.387300  177044 main.go:141] libmachine: (enable-default-cni-171116) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:12:12.387310  177044 main.go:141] libmachine: (enable-default-cni-171116) Creating domain...
	I1004 02:12:12.388617  177044 main.go:141] libmachine: (enable-default-cni-171116) define libvirt domain using xml: 
	I1004 02:12:12.388645  177044 main.go:141] libmachine: (enable-default-cni-171116) <domain type='kvm'>
	I1004 02:12:12.388659  177044 main.go:141] libmachine: (enable-default-cni-171116)   <name>enable-default-cni-171116</name>
	I1004 02:12:12.388677  177044 main.go:141] libmachine: (enable-default-cni-171116)   <memory unit='MiB'>3072</memory>
	I1004 02:12:12.388718  177044 main.go:141] libmachine: (enable-default-cni-171116)   <vcpu>2</vcpu>
	I1004 02:12:12.388740  177044 main.go:141] libmachine: (enable-default-cni-171116)   <features>
	I1004 02:12:12.388750  177044 main.go:141] libmachine: (enable-default-cni-171116)     <acpi/>
	I1004 02:12:12.388764  177044 main.go:141] libmachine: (enable-default-cni-171116)     <apic/>
	I1004 02:12:12.388773  177044 main.go:141] libmachine: (enable-default-cni-171116)     <pae/>
	I1004 02:12:12.388782  177044 main.go:141] libmachine: (enable-default-cni-171116)     
	I1004 02:12:12.388795  177044 main.go:141] libmachine: (enable-default-cni-171116)   </features>
	I1004 02:12:12.388810  177044 main.go:141] libmachine: (enable-default-cni-171116)   <cpu mode='host-passthrough'>
	I1004 02:12:12.388823  177044 main.go:141] libmachine: (enable-default-cni-171116)   
	I1004 02:12:12.388835  177044 main.go:141] libmachine: (enable-default-cni-171116)   </cpu>
	I1004 02:12:12.388867  177044 main.go:141] libmachine: (enable-default-cni-171116)   <os>
	I1004 02:12:12.388887  177044 main.go:141] libmachine: (enable-default-cni-171116)     <type>hvm</type>
	I1004 02:12:12.388904  177044 main.go:141] libmachine: (enable-default-cni-171116)     <boot dev='cdrom'/>
	I1004 02:12:12.388917  177044 main.go:141] libmachine: (enable-default-cni-171116)     <boot dev='hd'/>
	I1004 02:12:12.388933  177044 main.go:141] libmachine: (enable-default-cni-171116)     <bootmenu enable='no'/>
	I1004 02:12:12.388964  177044 main.go:141] libmachine: (enable-default-cni-171116)   </os>
	I1004 02:12:12.388979  177044 main.go:141] libmachine: (enable-default-cni-171116)   <devices>
	I1004 02:12:12.388993  177044 main.go:141] libmachine: (enable-default-cni-171116)     <disk type='file' device='cdrom'>
	I1004 02:12:12.389014  177044 main.go:141] libmachine: (enable-default-cni-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116/boot2docker.iso'/>
	I1004 02:12:12.389028  177044 main.go:141] libmachine: (enable-default-cni-171116)       <target dev='hdc' bus='scsi'/>
	I1004 02:12:12.389043  177044 main.go:141] libmachine: (enable-default-cni-171116)       <readonly/>
	I1004 02:12:12.389057  177044 main.go:141] libmachine: (enable-default-cni-171116)     </disk>
	I1004 02:12:12.389073  177044 main.go:141] libmachine: (enable-default-cni-171116)     <disk type='file' device='disk'>
	I1004 02:12:12.389089  177044 main.go:141] libmachine: (enable-default-cni-171116)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:12:12.389111  177044 main.go:141] libmachine: (enable-default-cni-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/enable-default-cni-171116/enable-default-cni-171116.rawdisk'/>
	I1004 02:12:12.389126  177044 main.go:141] libmachine: (enable-default-cni-171116)       <target dev='hda' bus='virtio'/>
	I1004 02:12:12.389145  177044 main.go:141] libmachine: (enable-default-cni-171116)     </disk>
	I1004 02:12:12.389161  177044 main.go:141] libmachine: (enable-default-cni-171116)     <interface type='network'>
	I1004 02:12:12.389174  177044 main.go:141] libmachine: (enable-default-cni-171116)       <source network='mk-enable-default-cni-171116'/>
	I1004 02:12:12.389183  177044 main.go:141] libmachine: (enable-default-cni-171116)       <model type='virtio'/>
	I1004 02:12:12.389194  177044 main.go:141] libmachine: (enable-default-cni-171116)     </interface>
	I1004 02:12:12.389211  177044 main.go:141] libmachine: (enable-default-cni-171116)     <interface type='network'>
	I1004 02:12:12.389223  177044 main.go:141] libmachine: (enable-default-cni-171116)       <source network='default'/>
	I1004 02:12:12.389235  177044 main.go:141] libmachine: (enable-default-cni-171116)       <model type='virtio'/>
	I1004 02:12:12.389244  177044 main.go:141] libmachine: (enable-default-cni-171116)     </interface>
	I1004 02:12:12.389250  177044 main.go:141] libmachine: (enable-default-cni-171116)     <serial type='pty'>
	I1004 02:12:12.389260  177044 main.go:141] libmachine: (enable-default-cni-171116)       <target port='0'/>
	I1004 02:12:12.389273  177044 main.go:141] libmachine: (enable-default-cni-171116)     </serial>
	I1004 02:12:12.389288  177044 main.go:141] libmachine: (enable-default-cni-171116)     <console type='pty'>
	I1004 02:12:12.389302  177044 main.go:141] libmachine: (enable-default-cni-171116)       <target type='serial' port='0'/>
	I1004 02:12:12.389319  177044 main.go:141] libmachine: (enable-default-cni-171116)     </console>
	I1004 02:12:12.389331  177044 main.go:141] libmachine: (enable-default-cni-171116)     <rng model='virtio'>
	I1004 02:12:12.389347  177044 main.go:141] libmachine: (enable-default-cni-171116)       <backend model='random'>/dev/random</backend>
	I1004 02:12:12.389356  177044 main.go:141] libmachine: (enable-default-cni-171116)     </rng>
	I1004 02:12:12.389363  177044 main.go:141] libmachine: (enable-default-cni-171116)     
	I1004 02:12:12.389375  177044 main.go:141] libmachine: (enable-default-cni-171116)     
	I1004 02:12:12.389390  177044 main.go:141] libmachine: (enable-default-cni-171116)   </devices>
	I1004 02:12:12.389403  177044 main.go:141] libmachine: (enable-default-cni-171116) </domain>
	I1004 02:12:12.389418  177044 main.go:141] libmachine: (enable-default-cni-171116) 
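The XML above is the libvirt domain definition minikube generates for the new VM: boot from the boot2docker ISO, a raw-format data disk, and two virtio NICs, one on the profile network and one on the default network. Once the domain is defined it can be read back for comparison with:

	# hypothetical manual check on the libvirt host
	virsh dumpxml enable-default-cni-171116 | less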
	I1004 02:12:12.393752  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:e5:b6:b7 in network default
	I1004 02:12:12.394406  177044 main.go:141] libmachine: (enable-default-cni-171116) Ensuring networks are active...
	I1004 02:12:12.394436  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:12.395143  177044 main.go:141] libmachine: (enable-default-cni-171116) Ensuring network default is active
	I1004 02:12:12.395471  177044 main.go:141] libmachine: (enable-default-cni-171116) Ensuring network mk-enable-default-cni-171116 is active
	I1004 02:12:12.396057  177044 main.go:141] libmachine: (enable-default-cni-171116) Getting domain xml...
	I1004 02:12:12.396697  177044 main.go:141] libmachine: (enable-default-cni-171116) Creating domain...
	I1004 02:12:09.802995  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:09.803579  175456 main.go:141] libmachine: (custom-flannel-171116) Found IP for machine: 192.168.72.15
	I1004 02:12:09.803610  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has current primary IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:09.803620  175456 main.go:141] libmachine: (custom-flannel-171116) Reserving static IP address...
	I1004 02:12:09.804204  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | unable to find host DHCP lease matching {name: "custom-flannel-171116", mac: "52:54:00:20:01:3b", ip: "192.168.72.15"} in network mk-custom-flannel-171116
	I1004 02:12:09.889860  175456 main.go:141] libmachine: (custom-flannel-171116) Reserved static IP address: 192.168.72.15
	I1004 02:12:09.889894  175456 main.go:141] libmachine: (custom-flannel-171116) Waiting for SSH to be available...
	I1004 02:12:09.889961  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | Getting to WaitForSSH function...
	I1004 02:12:09.892878  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:09.893346  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:minikube Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:09.893385  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:09.893696  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | Using SSH client type: external
	I1004 02:12:09.893724  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa (-rw-------)
	I1004 02:12:09.893758  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.15 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:12:09.893776  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | About to run SSH command:
	I1004 02:12:09.893787  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | exit 0
	I1004 02:12:09.997790  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | SSH cmd err, output: <nil>: 
	I1004 02:12:09.998163  175456 main.go:141] libmachine: (custom-flannel-171116) KVM machine creation complete!
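The WaitForSSH debug lines above show the exact external ssh invocation minikube uses. An equivalent manual login to the freshly provisioned VM, using the same key, user and options from that line, would be:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa \
	  docker@192.168.72.15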
	I1004 02:12:09.998635  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetConfigRaw
	I1004 02:12:09.999249  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:09.999444  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:09.999608  175456 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:12:09.999625  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetState
	I1004 02:12:10.001000  175456 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:12:10.001015  175456 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:12:10.001021  175456 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:12:10.001028  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.003474  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.003815  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.003845  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.004011  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:10.004180  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.004350  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.004478  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:10.004627  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:10.004971  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:10.004985  175456 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:12:10.141472  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:12:10.141501  175456 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:12:10.141512  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.144878  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.145451  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.145492  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.145624  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:10.145855  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.146092  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.146276  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:10.146480  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:10.146955  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:10.146978  175456 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:12:10.286783  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 02:12:10.286858  175456 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:12:10.286873  175456 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:12:10.286887  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetMachineName
	I1004 02:12:10.287145  175456 buildroot.go:166] provisioning hostname "custom-flannel-171116"
	I1004 02:12:10.287175  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetMachineName
	I1004 02:12:10.287377  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.290309  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.290658  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.290680  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.290799  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:10.291005  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.291196  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.291352  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:10.291476  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:10.291834  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:10.291849  175456 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-171116 && echo "custom-flannel-171116" | sudo tee /etc/hostname
	I1004 02:12:10.440730  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-171116
	
	I1004 02:12:10.440769  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.444533  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.444954  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.445021  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.445152  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:10.445361  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.445534  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.445691  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:10.445900  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:10.446258  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:10.446297  175456 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-171116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-171116/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-171116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:12:10.591503  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
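The shell snippet above pins the new hostname to 127.0.1.1 inside the guest. Whether it took effect can be checked with the same ssh sub-command form used in the command table earlier in this report (a manual check):

	minikube ssh -p custom-flannel-171116 grep custom-flannel-171116 /etc/hosts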
	I1004 02:12:10.591537  175456 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 02:12:10.591557  175456 buildroot.go:174] setting up certificates
	I1004 02:12:10.591565  175456 provision.go:83] configureAuth start
	I1004 02:12:10.591574  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetMachineName
	I1004 02:12:10.591902  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetIP
	I1004 02:12:10.594676  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.595043  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.595072  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.595260  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.598032  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.598413  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.598445  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.598625  175456 provision.go:138] copyHostCerts
	I1004 02:12:10.598716  175456 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 02:12:10.598735  175456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 02:12:10.598806  175456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 02:12:10.598910  175456 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 02:12:10.598920  175456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 02:12:10.598942  175456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 02:12:10.598994  175456 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 02:12:10.599001  175456 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 02:12:10.599020  175456 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 02:12:10.599062  175456 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-171116 san=[192.168.72.15 192.168.72.15 localhost 127.0.0.1 minikube custom-flannel-171116]
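Provisioning generates a per-machine server certificate with the SANs listed in the line above. The subject and SAN list can be confirmed with openssl against the server.pem path from the log (a manual check):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'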
	I1004 02:12:10.824753  175456 provision.go:172] copyRemoteCerts
	I1004 02:12:10.824819  175456 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:12:10.824851  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:10.828080  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.828438  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:10.828463  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:10.828684  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:10.828859  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:10.828964  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:10.829050  175456 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa Username:docker}
	I1004 02:12:10.924448  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:12:10.950037  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1004 02:12:10.974127  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:12:10.997712  175456 provision.go:86] duration metric: configureAuth took 406.130945ms
	I1004 02:12:10.997744  175456 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:12:10.997949  175456 config.go:182] Loaded profile config "custom-flannel-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:12:10.998039  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:11.000794  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.001209  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.001250  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.001356  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:11.001596  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.001802  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.001942  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:11.002136  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:11.002438  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:11.002452  175456 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:12:11.341658  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
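The step above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O inside the guest. The same state can be confirmed with the ssh sub-commands already used in the command table for the kindnet profile (manual checks):

	minikube ssh -p custom-flannel-171116 sudo cat /etc/sysconfig/crio.minikube
	minikube ssh -p custom-flannel-171116 sudo systemctl is-active crio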
	I1004 02:12:11.341688  175456 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:12:11.341700  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetURL
	I1004 02:12:11.343111  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | Using libvirt version 6000000
	I1004 02:12:11.345505  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.345892  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.345919  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.346077  175456 main.go:141] libmachine: Docker is up and running!
	I1004 02:12:11.346095  175456 main.go:141] libmachine: Reticulating splines...
	I1004 02:12:11.346105  175456 client.go:171] LocalClient.Create took 25.780762783s
	I1004 02:12:11.346133  175456 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-171116" took 25.78083474s
	I1004 02:12:11.346146  175456 start.go:300] post-start starting for "custom-flannel-171116" (driver="kvm2")
	I1004 02:12:11.346160  175456 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:12:11.346187  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:11.346447  175456 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:12:11.346469  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:11.348602  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.348922  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.348950  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.349156  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:11.349348  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.349499  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:11.349705  175456 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa Username:docker}
	I1004 02:12:11.448190  175456 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:12:11.452694  175456 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 02:12:11.452723  175456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 02:12:11.452796  175456 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 02:12:11.452914  175456 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 02:12:11.453032  175456 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 02:12:11.462452  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:12:11.485414  175456 start.go:303] post-start completed in 139.247358ms
	I1004 02:12:11.485473  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetConfigRaw
	I1004 02:12:11.486402  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetIP
	I1004 02:12:11.489685  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.490074  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.490106  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.490358  175456 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/config.json ...
	I1004 02:12:11.490554  175456 start.go:128] duration metric: createHost completed in 25.946840553s
	I1004 02:12:11.490584  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:11.493007  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.493367  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.493399  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.493532  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:11.493747  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.493970  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.494120  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:11.494310  175456 main.go:141] libmachine: Using SSH client type: native
	I1004 02:12:11.494661  175456 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.15 22 <nil> <nil>}
	I1004 02:12:11.494673  175456 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 02:12:11.626798  175456 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696385531.601111109
	
	I1004 02:12:11.626846  175456 fix.go:206] guest clock: 1696385531.601111109
	I1004 02:12:11.626857  175456 fix.go:219] Guest: 2023-10-04 02:12:11.601111109 +0000 UTC Remote: 2023-10-04 02:12:11.490568085 +0000 UTC m=+38.832735325 (delta=110.543024ms)
	I1004 02:12:11.626904  175456 fix.go:190] guest clock delta is within tolerance: 110.543024ms
	I1004 02:12:11.626909  175456 start.go:83] releasing machines lock for "custom-flannel-171116", held for 26.083382608s
	I1004 02:12:11.626941  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:11.627291  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetIP
	I1004 02:12:11.630333  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.630774  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.630813  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.630975  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:11.631589  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:11.631798  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .DriverName
	I1004 02:12:11.631891  175456 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:12:11.631945  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:11.632060  175456 ssh_runner.go:195] Run: cat /version.json
	I1004 02:12:11.632093  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHHostname
	I1004 02:12:11.635053  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.635388  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.635494  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.635530  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.635690  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:11.635885  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.635940  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:11.635995  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:11.636047  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:11.636176  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHPort
	I1004 02:12:11.636261  175456 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa Username:docker}
	I1004 02:12:11.636549  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHKeyPath
	I1004 02:12:11.636738  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetSSHUsername
	I1004 02:12:11.636885  175456 sshutil.go:53] new ssh client: &{IP:192.168.72.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/custom-flannel-171116/id_rsa Username:docker}
	I1004 02:12:11.760990  175456 ssh_runner.go:195] Run: systemctl --version
	I1004 02:12:11.770064  175456 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:12:11.937899  175456 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:12:11.944647  175456 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:12:11.944730  175456 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:12:11.960207  175456 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:12:11.960230  175456 start.go:469] detecting cgroup driver to use...
	I1004 02:12:11.960293  175456 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:12:11.976135  175456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:12:11.989949  175456 docker.go:197] disabling cri-docker service (if available) ...
	I1004 02:12:11.990015  175456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:12:12.004083  175456 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:12:12.017046  175456 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:12:12.128623  175456 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:12:12.263189  175456 docker.go:213] disabling docker service ...
	I1004 02:12:12.263266  175456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:12:12.281024  175456 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:12:12.294469  175456 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:12:12.430942  175456 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:12:12.559849  175456 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:12:12.574304  175456 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:12:12.592462  175456 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 02:12:12.592529  175456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:12:12.602329  175456 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:12:12.602403  175456 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:12:12.612130  175456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:12:12.623659  175456 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:12:12.635999  175456 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:12:12.646003  175456 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:12:12.654859  175456 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:12:12.654962  175456 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:12:12.668312  175456 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:12:12.677139  175456 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:12:12.790341  175456 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:12:13.000038  175456 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:12:13.000111  175456 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:12:13.007951  175456 start.go:537] Will wait 60s for crictl version
	I1004 02:12:13.008012  175456 ssh_runner.go:195] Run: which crictl
	I1004 02:12:13.012073  175456 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:12:13.052190  175456 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 02:12:13.052289  175456 ssh_runner.go:195] Run: crio --version
	I1004 02:12:13.113422  175456 ssh_runner.go:195] Run: crio --version
	I1004 02:12:13.171345  175456 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 02:12:09.855888  174058 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.071994645s)
	I1004 02:12:09.855941  174058 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:12:09.856038  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:09.856055  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=calico-171116 minikube.k8s.io/updated_at=2023_10_04T02_12_09_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:09.869793  174058 ops.go:34] apiserver oom_adj: -16
	I1004 02:12:10.023605  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:10.138392  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:10.731957  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:11.232322  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:11.731984  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:12.231605  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:12.731381  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:13.231422  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:13.732347  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:13.953887  177044 main.go:141] libmachine: (enable-default-cni-171116) Waiting to get IP...
	I1004 02:12:13.954857  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:13.955533  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:13.955563  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:13.955497  177121 retry.go:31] will retry after 229.416138ms: waiting for machine to come up
	I1004 02:12:14.187335  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:14.188076  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:14.188108  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:14.187980  177121 retry.go:31] will retry after 257.774293ms: waiting for machine to come up
	I1004 02:12:14.447658  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:14.448530  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:14.448561  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:14.448407  177121 retry.go:31] will retry after 325.519906ms: waiting for machine to come up
	I1004 02:12:14.776131  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:14.776948  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:14.776985  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:14.776904  177121 retry.go:31] will retry after 495.229172ms: waiting for machine to come up
	I1004 02:12:15.273983  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:15.274714  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:15.274762  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:15.274655  177121 retry.go:31] will retry after 659.422214ms: waiting for machine to come up
	I1004 02:12:15.936271  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:15.936919  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:15.936959  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:15.936830  177121 retry.go:31] will retry after 675.543984ms: waiting for machine to come up
	I1004 02:12:16.613808  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:16.614427  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:16.614515  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:16.614358  177121 retry.go:31] will retry after 1.141934287s: waiting for machine to come up
	I1004 02:12:13.172884  175456 main.go:141] libmachine: (custom-flannel-171116) Calling .GetIP
	I1004 02:12:13.176316  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:13.176754  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:01:3b", ip: ""} in network mk-custom-flannel-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:12:04 +0000 UTC Type:0 Mac:52:54:00:20:01:3b Iaid: IPaddr:192.168.72.15 Prefix:24 Hostname:custom-flannel-171116 Clientid:01:52:54:00:20:01:3b}
	I1004 02:12:13.176789  175456 main.go:141] libmachine: (custom-flannel-171116) DBG | domain custom-flannel-171116 has defined IP address 192.168.72.15 and MAC address 52:54:00:20:01:3b in network mk-custom-flannel-171116
	I1004 02:12:13.177063  175456 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 02:12:13.181701  175456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:12:13.196932  175456 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:12:13.196983  175456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:12:13.233958  175456 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 02:12:13.234022  175456 ssh_runner.go:195] Run: which lz4
	I1004 02:12:13.238587  175456 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 02:12:13.242869  175456 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:12:13.242905  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 02:12:15.213310  175456 crio.go:444] Took 1.974751 seconds to copy over tarball
	I1004 02:12:15.213397  175456 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:12:14.232282  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:14.733753  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:15.232155  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:15.731891  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:16.232059  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:16.731718  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:17.231794  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:17.731455  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:18.232107  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:18.732071  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:19.231899  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:19.731886  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:20.231584  174058 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:12:20.384552  174058 kubeadm.go:1081] duration metric: took 10.528562225s to wait for elevateKubeSystemPrivileges.
	I1004 02:12:20.384590  174058 kubeadm.go:406] StartCluster complete in 26.420057589s
	I1004 02:12:20.384614  174058 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:20.384700  174058 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:12:20.385965  174058 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:20.386304  174058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:12:20.386425  174058 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:12:20.386501  174058 config.go:182] Loaded profile config "calico-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:12:20.386504  174058 addons.go:69] Setting storage-provisioner=true in profile "calico-171116"
	I1004 02:12:20.386524  174058 addons.go:231] Setting addon storage-provisioner=true in "calico-171116"
	I1004 02:12:20.386533  174058 addons.go:69] Setting default-storageclass=true in profile "calico-171116"
	I1004 02:12:20.386559  174058 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-171116"
	I1004 02:12:20.386580  174058 host.go:66] Checking if "calico-171116" exists ...
	I1004 02:12:20.386955  174058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:20.386971  174058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:20.387008  174058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:20.387029  174058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:20.407435  174058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46611
	I1004 02:12:20.408052  174058 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:20.408674  174058 main.go:141] libmachine: Using API Version  1
	I1004 02:12:20.408694  174058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:20.409158  174058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I1004 02:12:20.409221  174058 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:20.409536  174058 main.go:141] libmachine: (calico-171116) Calling .GetState
	I1004 02:12:20.409932  174058 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:20.410673  174058 main.go:141] libmachine: Using API Version  1
	I1004 02:12:20.410695  174058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:20.411706  174058 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:20.412441  174058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:20.412478  174058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:20.413921  174058 addons.go:231] Setting addon default-storageclass=true in "calico-171116"
	I1004 02:12:20.413967  174058 host.go:66] Checking if "calico-171116" exists ...
	I1004 02:12:20.414351  174058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:20.414381  174058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:20.433506  174058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I1004 02:12:20.434151  174058 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:20.434750  174058 main.go:141] libmachine: Using API Version  1
	I1004 02:12:20.434777  174058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:20.435203  174058 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:20.435396  174058 main.go:141] libmachine: (calico-171116) Calling .GetState
	I1004 02:12:20.436557  174058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40791
	I1004 02:12:20.436943  174058 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:20.437393  174058 main.go:141] libmachine: (calico-171116) Calling .DriverName
	I1004 02:12:20.437530  174058 main.go:141] libmachine: Using API Version  1
	I1004 02:12:20.437549  174058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:20.439656  174058 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:12:20.437985  174058 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:20.441254  174058 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:12:20.441270  174058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:12:20.441291  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHHostname
	I1004 02:12:20.441795  174058 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:12:20.441819  174058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:12:20.445060  174058 main.go:141] libmachine: (calico-171116) DBG | domain calico-171116 has defined MAC address 52:54:00:59:30:ad in network mk-calico-171116
	I1004 02:12:20.445489  174058 main.go:141] libmachine: (calico-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:30:ad", ip: ""} in network mk-calico-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:11:36 +0000 UTC Type:0 Mac:52:54:00:59:30:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:calico-171116 Clientid:01:52:54:00:59:30:ad}
	I1004 02:12:20.445520  174058 main.go:141] libmachine: (calico-171116) DBG | domain calico-171116 has defined IP address 192.168.50.145 and MAC address 52:54:00:59:30:ad in network mk-calico-171116
	I1004 02:12:20.445773  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHPort
	I1004 02:12:20.446007  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHKeyPath
	I1004 02:12:20.446177  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHUsername
	I1004 02:12:20.446291  174058 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/calico-171116/id_rsa Username:docker}
	I1004 02:12:20.462733  174058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46301
	I1004 02:12:20.463512  174058 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:12:20.464148  174058 main.go:141] libmachine: Using API Version  1
	I1004 02:12:20.464171  174058 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:12:20.464536  174058 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:12:20.464774  174058 main.go:141] libmachine: (calico-171116) Calling .GetState
	I1004 02:12:20.466743  174058 main.go:141] libmachine: (calico-171116) Calling .DriverName
	I1004 02:12:20.467148  174058 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:12:20.467168  174058 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:12:20.467186  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHHostname
	I1004 02:12:20.470505  174058 main.go:141] libmachine: (calico-171116) DBG | domain calico-171116 has defined MAC address 52:54:00:59:30:ad in network mk-calico-171116
	I1004 02:12:20.471029  174058 main.go:141] libmachine: (calico-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:30:ad", ip: ""} in network mk-calico-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:11:36 +0000 UTC Type:0 Mac:52:54:00:59:30:ad Iaid: IPaddr:192.168.50.145 Prefix:24 Hostname:calico-171116 Clientid:01:52:54:00:59:30:ad}
	I1004 02:12:20.471120  174058 main.go:141] libmachine: (calico-171116) DBG | domain calico-171116 has defined IP address 192.168.50.145 and MAC address 52:54:00:59:30:ad in network mk-calico-171116
	I1004 02:12:20.471428  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHPort
	I1004 02:12:20.471762  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHKeyPath
	I1004 02:12:20.471953  174058 main.go:141] libmachine: (calico-171116) Calling .GetSSHUsername
	I1004 02:12:20.472152  174058 sshutil.go:53] new ssh client: &{IP:192.168.50.145 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/calico-171116/id_rsa Username:docker}
	I1004 02:12:20.490675  174058 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-171116" context rescaled to 1 replicas
	I1004 02:12:20.490718  174058 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.145 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:12:20.493206  174058 out.go:177] * Verifying Kubernetes components...
	I1004 02:12:18.815184  175456 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.601752948s)
	I1004 02:12:18.815217  175456 crio.go:451] Took 3.601875 seconds to extract the tarball
	I1004 02:12:18.815226  175456 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:12:18.861040  175456 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:12:18.937106  175456 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 02:12:18.937127  175456 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:12:18.937208  175456 ssh_runner.go:195] Run: crio config
	I1004 02:12:19.004716  175456 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1004 02:12:19.004766  175456 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 02:12:19.004792  175456 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.15 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-171116 NodeName:custom-flannel-171116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:12:19.004971  175456 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-171116"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:12:19.005063  175456 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=custom-flannel-171116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:}
	I1004 02:12:19.005126  175456 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 02:12:19.017262  175456 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:12:19.017341  175456 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:12:19.028031  175456 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1004 02:12:19.045561  175456 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:12:19.064905  175456 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1004 02:12:19.082988  175456 ssh_runner.go:195] Run: grep 192.168.72.15	control-plane.minikube.internal$ /etc/hosts
	I1004 02:12:19.087196  175456 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:12:19.100658  175456 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116 for IP: 192.168.72.15
	I1004 02:12:19.100707  175456 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.100873  175456 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 02:12:19.100911  175456 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 02:12:19.100950  175456 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.key
	I1004 02:12:19.100963  175456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.crt with IP's: []
	I1004 02:12:19.273650  175456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.crt ...
	I1004 02:12:19.273696  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.crt: {Name:mkcf2a1d6aff9eb4cd7d8acf02dd123e3f881504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.273965  175456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.key ...
	I1004 02:12:19.273992  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/client.key: {Name:mk8cf08beed60726de05151f56e3d361e5dd1843 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.274126  175456 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key.9d44ac2f
	I1004 02:12:19.274144  175456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt.9d44ac2f with IP's: [192.168.72.15 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 02:12:19.484751  175456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt.9d44ac2f ...
	I1004 02:12:19.484782  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt.9d44ac2f: {Name:mk21840baecb55b4f15d52a5e72e9588248d13d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.484951  175456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key.9d44ac2f ...
	I1004 02:12:19.484963  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key.9d44ac2f: {Name:mk7fdcd474e85cab3bc044c8d39347d8acf681b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.485028  175456 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt.9d44ac2f -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt
	I1004 02:12:19.485101  175456 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key.9d44ac2f -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key
	I1004 02:12:19.485157  175456 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.key
	I1004 02:12:19.485179  175456 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.crt with IP's: []
	I1004 02:12:19.701404  175456 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.crt ...
	I1004 02:12:19.701449  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.crt: {Name:mk6f78824195b4e8539e26c650ea72e19f9ac94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.701670  175456 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.key ...
	I1004 02:12:19.701696  175456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.key: {Name:mkd518204b3615b07aae0172089f53ae3cf088da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:12:19.701974  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 02:12:19.702039  175456 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 02:12:19.702054  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 02:12:19.702105  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:12:19.702148  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:12:19.702180  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 02:12:19.702243  175456 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:12:19.703056  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 02:12:19.735661  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:12:19.767324  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:12:19.796439  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/custom-flannel-171116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 02:12:19.829332  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:12:19.858667  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:12:19.886317  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:12:19.916440  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:12:19.945481  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:12:19.972952  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 02:12:20.000820  175456 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 02:12:20.025698  175456 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:12:20.043515  175456 ssh_runner.go:195] Run: openssl version
	I1004 02:12:20.050278  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 02:12:20.062984  175456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 02:12:20.068122  175456 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 02:12:20.068189  175456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 02:12:20.076059  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 02:12:20.087082  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:12:20.098461  175456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:12:20.103742  175456 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:12:20.103823  175456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:12:20.110486  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:12:20.123103  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 02:12:20.135000  175456 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 02:12:20.140616  175456 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 02:12:20.140690  175456 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 02:12:20.147078  175456 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 02:12:20.158767  175456 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 02:12:20.163859  175456 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 02:12:20.163911  175456 kubeadm.go:404] StartCluster: {Name:custom-flannel-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:custom-flannel-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.15 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:12:20.164004  175456 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:12:20.164060  175456 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:12:20.217192  175456 cri.go:89] found id: ""
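The empty `found id: ""` result means no kube-system containers exist yet, so this is a fresh cluster rather than a restart. A hedged sketch of the same crictl query (flags copied from the command above; the parsing is illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainerIDs asks crictl for the IDs of all containers labeled
// with the kube-system namespace, mirroring the command in the log above.
func kubeSystemContainerIDs() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := kubeSystemContainerIDs()
	if err != nil {
		fmt.Println("crictl query failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}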
	I1004 02:12:20.217277  175456 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:12:20.229375  175456 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:12:20.242478  175456 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:12:20.255705  175456 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:12:20.255754  175456 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:12:20.318548  175456 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:12:20.318696  175456 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:12:20.509244  175456 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:12:20.509378  175456 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:12:20.509484  175456 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:12:20.838148  175456 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:12:17.758410  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:17.758965  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:17.758992  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:17.758899  177121 retry.go:31] will retry after 1.057744961s: waiting for machine to come up
	I1004 02:12:18.817939  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:18.818522  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:18.818550  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:18.818463  177121 retry.go:31] will retry after 1.468785946s: waiting for machine to come up
	I1004 02:12:20.289228  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:20.289886  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:20.289907  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:20.289816  177121 retry.go:31] will retry after 2.122167214s: waiting for machine to come up
	I1004 02:12:22.413409  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:22.414099  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:22.414125  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:22.414037  177121 retry.go:31] will retry after 2.143433982s: waiting for machine to come up
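The libmachine DBG lines for enable-default-cni-171116 show a poll-and-backoff loop: look up the domain's IP, and if it is not there yet, sleep a growing, jittered interval and try again. A minimal sketch of such a wait loop, assuming a caller-supplied lookup function (not minikube's retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := time.Second
	for attempt := 1; ; attempt++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", errors.New("timed out waiting for machine to come up")
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		if backoff < 8*time.Second {
			backoff *= 2 // grow the interval, but keep retries reasonably frequent
		}
	}
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		// Hypothetical lookup that only succeeds on the third attempt.
		calls++
		if calls < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.15", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}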
	I1004 02:12:20.841196  175456 out.go:204]   - Generating certificates and keys ...
	I1004 02:12:20.841313  175456 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:12:20.841419  175456 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:12:21.103489  175456 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:12:21.168214  175456 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:12:21.289992  175456 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:12:21.558827  175456 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 02:12:21.797580  175456 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 02:12:21.797806  175456 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-171116 localhost] and IPs [192.168.72.15 127.0.0.1 ::1]
	I1004 02:12:21.899852  175456 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 02:12:21.900065  175456 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-171116 localhost] and IPs [192.168.72.15 127.0.0.1 ::1]
	I1004 02:12:22.334002  175456 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:12:22.501351  175456 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:12:22.633262  175456 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 02:12:22.633369  175456 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:12:23.064071  175456 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:12:23.166468  175456 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:12:23.367125  175456 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:12:23.503834  175456 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:12:23.504914  175456 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:12:23.507575  175456 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:12:20.494619  174058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:12:20.657350  174058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:12:20.677584  174058 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:12:20.756554  174058 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:12:20.757635  174058 node_ready.go:35] waiting up to 15m0s for node "calico-171116" to be "Ready" ...
	I1004 02:12:24.074517  174058 node_ready.go:58] node "calico-171116" has status "Ready":"False"
	I1004 02:12:24.149916  174058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.472291876s)
	I1004 02:12:24.149963  174058 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.393366189s)
	I1004 02:12:24.149991  174058 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 02:12:24.149993  174058 main.go:141] libmachine: Making call to close driver server
	I1004 02:12:24.150009  174058 main.go:141] libmachine: (calico-171116) Calling .Close
	I1004 02:12:24.150035  174058 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.49264986s)
	I1004 02:12:24.150077  174058 main.go:141] libmachine: Making call to close driver server
	I1004 02:12:24.150091  174058 main.go:141] libmachine: (calico-171116) Calling .Close
	I1004 02:12:24.150355  174058 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:12:24.150372  174058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:12:24.150382  174058 main.go:141] libmachine: Making call to close driver server
	I1004 02:12:24.150390  174058 main.go:141] libmachine: (calico-171116) Calling .Close
	I1004 02:12:24.150528  174058 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:12:24.150551  174058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:12:24.150565  174058 main.go:141] libmachine: Making call to close driver server
	I1004 02:12:24.150575  174058 main.go:141] libmachine: (calico-171116) Calling .Close
	I1004 02:12:24.150717  174058 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:12:24.150734  174058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:12:24.150745  174058 main.go:141] libmachine: (calico-171116) DBG | Closing plugin on server side
	I1004 02:12:24.152476  174058 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:12:24.152503  174058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:12:24.171408  174058 main.go:141] libmachine: Making call to close driver server
	I1004 02:12:24.171442  174058 main.go:141] libmachine: (calico-171116) Calling .Close
	I1004 02:12:24.171795  174058 main.go:141] libmachine: (calico-171116) DBG | Closing plugin on server side
	I1004 02:12:24.173617  174058 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:12:24.173639  174058 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:12:24.176374  174058 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1004 02:12:24.178390  174058 addons.go:502] enable addons completed in 3.791945459s: enabled=[storage-provisioner default-storageclass]
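The long /bin/bash pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway: it inserts a hosts{} block before the `forward . /etc/resolv.conf` plugin and a `log` directive before `errors`. A sketch of that Corefile rewrite as a plain string edit (the real flow pipes kubectl get/replace through sed; the sample Corefile is illustrative):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the forward plugin and a
// log directive before errors, like the sed expressions in the pipeline above.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		if trimmed == "errors" {
			out.WriteString("        log\n")
		}
		out.WriteString(line)
		out.WriteString("\n")
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}"
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}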
	I1004 02:12:24.559637  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:24.560267  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:24.560294  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:24.560189  177121 retry.go:31] will retry after 3.220873003s: waiting for machine to come up
	I1004 02:12:23.584013  175456 out.go:204]   - Booting up control plane ...
	I1004 02:12:23.584233  175456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:12:23.584361  175456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:12:23.584463  175456 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:12:23.584636  175456 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:12:23.584733  175456 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:12:23.584786  175456 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:12:23.682829  175456 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:12:26.553185  174058 node_ready.go:58] node "calico-171116" has status "Ready":"False"
	I1004 02:12:29.053039  174058 node_ready.go:58] node "calico-171116" has status "Ready":"False"
	I1004 02:12:27.782636  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:27.783305  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:27.783339  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:27.783219  177121 retry.go:31] will retry after 4.193652103s: waiting for machine to come up
	I1004 02:12:31.978824  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | domain enable-default-cni-171116 has defined MAC address 52:54:00:be:5c:41 in network mk-enable-default-cni-171116
	I1004 02:12:31.979404  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | unable to find current IP address of domain enable-default-cni-171116 in network mk-enable-default-cni-171116
	I1004 02:12:31.979439  177044 main.go:141] libmachine: (enable-default-cni-171116) DBG | I1004 02:12:31.979324  177121 retry.go:31] will retry after 3.613366453s: waiting for machine to come up
	I1004 02:12:32.684319  175456 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004159 seconds
	I1004 02:12:32.684484  175456 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:12:32.703484  175456 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:12:33.232815  175456 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:12:33.233119  175456 kubeadm.go:322] [mark-control-plane] Marking the node custom-flannel-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:12:33.748859  175456 kubeadm.go:322] [bootstrap-token] Using token: hxkz4u.hrnhlshdv4zishnm
	I1004 02:12:33.750358  175456 out.go:204]   - Configuring RBAC rules ...
	I1004 02:12:33.750501  175456 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:12:33.756887  175456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:12:33.774337  175456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:12:33.780474  175456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:12:33.785100  175456 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:12:33.793825  175456 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:12:33.821778  175456 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:12:34.112510  175456 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:12:34.164327  175456 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:12:34.165321  175456 kubeadm.go:322] 
	I1004 02:12:34.165433  175456 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:12:34.165462  175456 kubeadm.go:322] 
	I1004 02:12:34.165568  175456 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:12:34.165579  175456 kubeadm.go:322] 
	I1004 02:12:34.165612  175456 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:12:34.165709  175456 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:12:34.165786  175456 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:12:34.165803  175456 kubeadm.go:322] 
	I1004 02:12:34.165885  175456 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:12:34.165896  175456 kubeadm.go:322] 
	I1004 02:12:34.165965  175456 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:12:34.165975  175456 kubeadm.go:322] 
	I1004 02:12:34.166035  175456 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:12:34.166109  175456 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:12:34.166170  175456 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:12:34.166176  175456 kubeadm.go:322] 
	I1004 02:12:34.166243  175456 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:12:34.166336  175456 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:12:34.166352  175456 kubeadm.go:322] 
	I1004 02:12:34.166463  175456 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token hxkz4u.hrnhlshdv4zishnm \
	I1004 02:12:34.166640  175456 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:12:34.166664  175456 kubeadm.go:322] 	--control-plane 
	I1004 02:12:34.166673  175456 kubeadm.go:322] 
	I1004 02:12:34.166814  175456 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:12:34.166829  175456 kubeadm.go:322] 
	I1004 02:12:34.166955  175456 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token hxkz4u.hrnhlshdv4zishnm \
	I1004 02:12:34.167109  175456 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:12:34.167730  175456 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:12:34.167778  175456 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1004 02:12:34.169712  175456 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1004 02:12:31.054776  174058 node_ready.go:58] node "calico-171116" has status "Ready":"False"
	I1004 02:12:32.553125  174058 node_ready.go:49] node "calico-171116" has status "Ready":"True"
	I1004 02:12:32.553154  174058 node_ready.go:38] duration metric: took 11.795491602s waiting for node "calico-171116" to be "Ready" ...
	I1004 02:12:32.553166  174058 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:12:32.562793  174058 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-7ddc4f45bc-jzmqk" in "kube-system" namespace to be "Ready" ...
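The node_ready/pod_ready lines poll the API server until the node reports the Ready condition and the system-critical pods follow. A hedged client-go sketch of the node half of that wait; the kubeconfig path, node name, and polling interval are assumptions, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	for {
		n, err := client.CoreV1().Nodes().Get(ctx, "calico-171116", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		fmt.Println("node not Ready yet, retrying in 3s")
		time.Sleep(3 * time.Second)
	}
}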
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:58:06 UTC, ends at Wed 2023-10-04 02:12:37 UTC. --
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.250775631Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385557250752230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d216430d-8fd0-4dc2-a58b-3dd6285bb827 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.251836785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27ffc00c-bb1c-4d11-939e-74918ee38412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.251910067Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27ffc00c-bb1c-4d11-939e-74918ee38412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.252069989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27ffc00c-bb1c-4d11-939e-74918ee38412 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.297505119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3d873863-ef32-4892-9471-26c0673b5fcf name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.297584281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3d873863-ef32-4892-9471-26c0673b5fcf name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.299296328Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c7f362b9-add9-4c30-b379-6f07e9dc4759 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.300111179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385557300094332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c7f362b9-add9-4c30-b379-6f07e9dc4759 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.301083491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ef90bdfa-a9e1-4e02-b2b3-fb63691334c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.301295908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ef90bdfa-a9e1-4e02-b2b3-fb63691334c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.301523149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ef90bdfa-a9e1-4e02-b2b3-fb63691334c3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.350988793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=08dac721-74e9-4701-bc0b-7ba869541900 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.351085685Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=08dac721-74e9-4701-bc0b-7ba869541900 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.352013127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=af40e293-66f0-4ff2-8377-e56dc9181770 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.352523017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385557352507982,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=af40e293-66f0-4ff2-8377-e56dc9181770 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.353026121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4420dc6c-e580-4f9e-8fdc-2aa476958ff5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.353105528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4420dc6c-e580-4f9e-8fdc-2aa476958ff5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.353407535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4420dc6c-e580-4f9e-8fdc-2aa476958ff5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.394527415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8cd7d1e9-4cc3-409b-a65a-d0bf0f49aeed name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.394635975Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8cd7d1e9-4cc3-409b-a65a-d0bf0f49aeed name=/runtime.v1.RuntimeService/Version
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.396839818Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8ad47901-96cd-4505-ad18-bcb8e5861b89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.397332608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385557397314378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8ad47901-96cd-4505-ad18-bcb8e5861b89 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.397797408Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=392acde5-3d5e-4776-ad71-51b4f3f2b521 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.397875746Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=392acde5-3d5e-4776-ad71-51b4f3f2b521 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:12:37 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:12:37.398041996Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=392acde5-3d5e-4776-ad71-51b4f3f2b521 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e68832fdc1a10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   43c3765fb4461       storage-provisioner
	e79ebe90e174e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   86f412782b2de       coredns-5dd5756b68-gjn6v
	4cfac8b575d61       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   9 minutes ago       Running             kube-proxy                0                   94996d05e0580       kube-proxy-b5ltp
	b11685bcb8d2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   d6951eb8f9820       etcd-default-k8s-diff-port-239802
	61f2aacf5ae30       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   9 minutes ago       Running             kube-scheduler            2                   5f9ec4325c8eb       kube-scheduler-default-k8s-diff-port-239802
	7eb2c7cdd906b       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   9 minutes ago       Running             kube-controller-manager   2                   60c3ef856ec9e       kube-controller-manager-default-k8s-diff-port-239802
	88b798cbf497b       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   9 minutes ago       Running             kube-apiserver            2                   8a0cd1daa0fce       kube-apiserver-default-k8s-diff-port-239802
	
	* 
	* ==> coredns [e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44913 - 9724 "HINFO IN 2172030799814606730.8629516611717317443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.103130129s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-239802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-239802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=default-k8s-diff-port-239802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 02:03:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-239802
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:08:44 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:08:44 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:08:44 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:08:44 +0000   Wed, 04 Oct 2023 02:03:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.105
	  Hostname:    default-k8s-diff-port-239802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e9ba040ccf943748a952ffe0b1f0c13
	  System UUID:                7e9ba040-ccf9-4374-8a95-2ffe0b1f0c13
	  Boot ID:                    faf6834d-b499-4d93-a0d5-ecbdb74af482
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gjn6v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-239802                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-239802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-239802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-b5ltp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-default-k8s-diff-port-239802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-c5ww7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m20s  kubelet          Node default-k8s-diff-port-239802 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m10s  kubelet          Node default-k8s-diff-port-239802 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node default-k8s-diff-port-239802 event: Registered Node default-k8s-diff-port-239802 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074347] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 4 01:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.493660] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160386] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.530179] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.901548] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.124596] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.137925] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.098545] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.221220] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.220280] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[ +20.171343] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 4 02:03] systemd-fstab-generator[3506]: Ignoring "noauto" for root device
	[  +9.293636] systemd-fstab-generator[3828]: Ignoring "noauto" for root device
	[ +14.403595] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 02:12] hrtimer: interrupt took 1794197 ns
	
	* 
	* ==> etcd [b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41] <==
	* {"level":"info","ts":"2023-10-04T02:03:12.458202Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"256dbd8251ce3905","local-member-attributes":"{Name:default-k8s-diff-port-239802 ClientURLs:[https://192.168.61.105:2379]}","request-path":"/0/members/256dbd8251ce3905/attributes","cluster-id":"80e1112f02df7d72","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T02:03:12.458294Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T02:03:12.458473Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T02:03:12.459842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.105:2379"}
	{"level":"info","ts":"2023-10-04T02:03:12.46011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T02:03:12.460208Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-04T02:03:12.46028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T02:03:12.460468Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T02:03:12.463959Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e1112f02df7d72","local-member-id":"256dbd8251ce3905","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T02:03:12.464046Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T02:03:12.464067Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-04T02:10:21.005488Z","caller":"traceutil/trace.go:171","msg":"trace[308928883] transaction","detail":"{read_only:false; response_revision:789; number_of_response:1; }","duration":"149.604618ms","start":"2023-10-04T02:10:20.855832Z","end":"2023-10-04T02:10:21.005436Z","steps":["trace[308928883] 'process raft request'  (duration: 149.479898ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:21.607061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"397.414836ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:10:21.607324Z","caller":"traceutil/trace.go:171","msg":"trace[1312185474] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:789; }","duration":"397.801465ms","start":"2023-10-04T02:10:21.209491Z","end":"2023-10-04T02:10:21.607293Z","steps":["trace[1312185474] 'range keys from in-memory index tree'  (duration: 397.264244ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:21.607429Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:21.209478Z","time spent":"397.926923ms","remote":"127.0.0.1:60898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2023-10-04T02:10:21.608041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.376015ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108843034626627308 > lease_revoke:<id:39058af86cc216a7>","response":"size:28"}
	{"level":"warn","ts":"2023-10-04T02:10:21.930458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.040536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:10:21.930564Z","caller":"traceutil/trace.go:171","msg":"trace[1078186917] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:789; }","duration":"162.162007ms","start":"2023-10-04T02:10:21.768385Z","end":"2023-10-04T02:10:21.930547Z","steps":["trace[1078186917] 'range keys from in-memory index tree'  (duration: 161.949243ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:49.304099Z","caller":"traceutil/trace.go:171","msg":"trace[706154042] transaction","detail":"{read_only:false; response_revision:812; number_of_response:1; }","duration":"107.639501ms","start":"2023-10-04T02:10:49.196442Z","end":"2023-10-04T02:10:49.304082Z","steps":["trace[706154042] 'process raft request'  (duration: 107.265393ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:51.506223Z","caller":"traceutil/trace.go:171","msg":"trace[60450801] transaction","detail":"{read_only:false; response_revision:813; number_of_response:1; }","duration":"192.162285ms","start":"2023-10-04T02:10:51.314046Z","end":"2023-10-04T02:10:51.506208Z","steps":["trace[60450801] 'process raft request'  (duration: 191.897305ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:11:54.255489Z","caller":"traceutil/trace.go:171","msg":"trace[299668416] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"262.75889ms","start":"2023-10-04T02:11:53.992685Z","end":"2023-10-04T02:11:54.255443Z","steps":["trace[299668416] 'process raft request'  (duration: 262.329094ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:12:16.091555Z","caller":"traceutil/trace.go:171","msg":"trace[699265712] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"125.879874ms","start":"2023-10-04T02:12:15.965649Z","end":"2023-10-04T02:12:16.091529Z","steps":["trace[699265712] 'process raft request'  (duration: 63.97785ms)","trace[699265712] 'compare'  (duration: 61.73589ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-04T02:12:16.411507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.14344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:12:16.411625Z","caller":"traceutil/trace.go:171","msg":"trace[758708642] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:880; }","duration":"204.349185ms","start":"2023-10-04T02:12:16.207249Z","end":"2023-10-04T02:12:16.411598Z","steps":["trace[758708642] 'range keys from in-memory index tree'  (duration: 204.017345ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:12:16.606884Z","caller":"traceutil/trace.go:171","msg":"trace[1300432100] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"188.250508ms","start":"2023-10-04T02:12:16.418612Z","end":"2023-10-04T02:12:16.606863Z","steps":["trace[1300432100] 'process raft request'  (duration: 188.02898ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  02:12:37 up 14 min,  0 users,  load average: 0.16, 0.26, 0.21
	Linux default-k8s-diff-port-239802 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd] <==
	* W1004 02:08:15.054381       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:08:15.054487       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:08:15.054495       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:08:15.054406       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:08:15.054585       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:08:15.055516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:09:13.948995       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:09:15.055252       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:09:15.055354       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:09:15.055366       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:09:15.056372       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:09:15.056440       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:09:15.056469       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:10:13.949630       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 02:11:13.948367       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:11:15.056399       1 handler_proxy.go:93] no RequestInfo found in the context
	W1004 02:11:15.056579       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:11:15.056820       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:11:15.056864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1004 02:11:15.056788       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:11:15.058601       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:12:13.949011       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece] <==
	* I1004 02:07:07.689663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="227.453µs"
	E1004 02:07:29.237649       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:29.707652       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:07:59.243975       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:59.718583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:08:29.249779       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:29.727924       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:08:59.256175       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:59.738697       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:09:29.266730       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:29.748991       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:09:41.705718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="588.573µs"
	I1004 02:09:53.699844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.282µs"
	E1004 02:09:59.272358       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:59.777036       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:29.278661       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:29.787504       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:59.286229       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:59.797546       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:11:29.296748       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:11:29.812893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:11:59.305690       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:11:59.831749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:12:29.314720       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:12:29.847512       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9] <==
	* I1004 02:03:31.371444       1 server_others.go:69] "Using iptables proxy"
	I1004 02:03:31.408306       1 node.go:141] Successfully retrieved node IP: 192.168.61.105
	I1004 02:03:31.486983       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 02:03:31.487087       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 02:03:31.491388       1 server_others.go:152] "Using iptables Proxier"
	I1004 02:03:31.491506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 02:03:31.491762       1 server.go:846] "Version info" version="v1.28.2"
	I1004 02:03:31.491989       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:03:31.492933       1 config.go:188] "Starting service config controller"
	I1004 02:03:31.492982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 02:03:31.493030       1 config.go:97] "Starting endpoint slice config controller"
	I1004 02:03:31.493049       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 02:03:31.493651       1 config.go:315] "Starting node config controller"
	I1004 02:03:31.493697       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 02:03:31.594105       1 shared_informer.go:318] Caches are synced for node config
	I1004 02:03:31.594301       1 shared_informer.go:318] Caches are synced for service config
	I1004 02:03:31.594320       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279] <==
	* W1004 02:03:14.132885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:03:14.132918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 02:03:14.994915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 02:03:14.994976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 02:03:15.071547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:03:15.071675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 02:03:15.181003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 02:03:15.181112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 02:03:15.249361       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:03:15.249464       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 02:03:15.368338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:03:15.368473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 02:03:15.407176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:03:15.407251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 02:03:15.413005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 02:03:15.413062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 02:03:15.413187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:03:15.413199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 02:03:15.419407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:03:15.419460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 02:03:15.420594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 02:03:15.420778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 02:03:15.479519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 02:03:15.479609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1004 02:03:17.807023       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:58:06 UTC, ends at Wed 2023-10-04 02:12:38 UTC. --
	Oct 04 02:10:06 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:06.669230    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:10:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:17.670812    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:10:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:17.701039    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:10:17 default-k8s-diff-port-239802 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:10:17 default-k8s-diff-port-239802 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:10:17 default-k8s-diff-port-239802 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:10:28 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:28.670086    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:10:39 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:39.668805    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:10:51 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:10:51.670486    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:11:04 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:04.670939    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:11:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:17.701420    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:11:17 default-k8s-diff-port-239802 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:11:17 default-k8s-diff-port-239802 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:11:17 default-k8s-diff-port-239802 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:11:18 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:18.669662    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:11:32 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:32.669814    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:11:43 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:43.670502    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:11:56 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:11:56.670906    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:11 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:11.670056    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:17.700840    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:12:17 default-k8s-diff-port-239802 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:12:17 default-k8s-diff-port-239802 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:12:17 default-k8s-diff-port-239802 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:12:22 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:22.669884    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:33 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:33.670389    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	
	* 
	* ==> storage-provisioner [e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d] <==
	* I1004 02:03:34.376497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:03:34.395267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:03:34.396056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:03:34.412994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:03:34.415385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2!
	I1004 02:03:34.418485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bdfec96-db8d-49ca-ab54-6e7d9d62c081", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2 became leader
	I1004 02:03:34.516235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-c5ww7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7: exit status 1 (86.049193ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-c5ww7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.55s)
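The post-mortem above shows the only non-Running pod in this profile was metrics-server-57f55c9bc5-c5ww7, which the kubelet log reports stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4 (the deliberately unreachable registry this test configures via --registries=MetricsServer=fake.domain, see the Audit table further down). As a rough sketch, one way to surface the waiting reason for every kube-system pod by hand, assuming the profile is still running (illustrative only, not part of the harness):

	kubectl --context default-k8s-diff-port-239802 -n kube-system get pods \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].state.waiting.reason}{"\n"}{end}'

On this node the metrics-server entry would be expected to show ImagePullBackOff, matching the pod_workers.go errors in the kubelet section above.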

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (405.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-509298 -n embed-certs-509298
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:11:15.806117111 +0000 UTC m=+5268.177148144
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-509298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-509298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.352µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-509298 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
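The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to carry the substituted image registry.k8s.io/echoserver:1.4 (see the "addons enable dashboard -p embed-certs-509298 --images=MetricsScraper=..." entry in the Audit table below); the describe command itself appears to have failed only because the 9m0s deadline had already expired when it ran (context deadline exceeded after 2.352µs). A minimal sketch of how the same image check could be run by hand against a live profile, outside the harness:

	kubectl --context embed-certs-509298 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

If the dashboard addon had actually come up, this would print an image string containing registry.k8s.io/echoserver:1.4.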
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-509298 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-509298 logs -n 25: (1.561992881s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC | 04 Oct 23 02:09 UTC |
	| start   | -p auto-171116 --memory=3072                           | auto-171116                  | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC | 04 Oct 23 02:11 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC | 04 Oct 23 02:09 UTC |
	| start   | -p kindnet-171116                                      | kindnet-171116               | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| ssh     | -p auto-171116 pgrep -a                                | auto-171116                  | jenkins | v1.31.2 | 04 Oct 23 02:11 UTC | 04 Oct 23 02:11 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
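For reference, the kindnet cluster start recorded in the table above can be approximated with the invocation below; this is a sketch assembled from the logged flags (the profile name kindnet-171116 and the kvm2/crio environment are specific to this CI host), not a command copied from the report itself:

	minikube start -p kindnet-171116 \
		--memory=3072 \
		--alsologtostderr --wait=true \
		--wait-timeout=15m \
		--cni=kindnet --driver=kvm2 \
		--container-runtime=crio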
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 02:09:58
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:09:58.830024  173218 out.go:296] Setting OutFile to fd 1 ...
	I1004 02:09:58.830416  173218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:09:58.830438  173218 out.go:309] Setting ErrFile to fd 2...
	I1004 02:09:58.830447  173218 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:09:58.830759  173218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 02:09:58.831675  173218 out.go:303] Setting JSON to false
	I1004 02:09:58.833116  173218 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10350,"bootTime":1696375049,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:09:58.833209  173218 start.go:138] virtualization: kvm guest
	I1004 02:09:58.835684  173218 out.go:177] * [kindnet-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:09:58.837271  173218 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 02:09:58.837282  173218 notify.go:220] Checking for updates...
	I1004 02:09:58.838886  173218 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:09:58.840311  173218 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:09:58.841760  173218 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:09:58.843150  173218 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:09:58.844827  173218 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:09:58.847095  173218 config.go:182] Loaded profile config "auto-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:58.847259  173218 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:58.847385  173218 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:58.847517  173218 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 02:09:58.887929  173218 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:09:58.889475  173218 start.go:298] selected driver: kvm2
	I1004 02:09:58.889494  173218 start.go:902] validating driver "kvm2" against <nil>
	I1004 02:09:58.889511  173218 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:09:58.890480  173218 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:09:58.890572  173218 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:09:58.907679  173218 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 02:09:58.907731  173218 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 02:09:58.908010  173218 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:09:58.908056  173218 cni.go:84] Creating CNI manager for "kindnet"
	I1004 02:09:58.908079  173218 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I1004 02:09:58.908088  173218 start_flags.go:321] config:
	{Name:kindnet-171116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:09:58.908259  173218 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:09:58.910536  173218 out.go:177] * Starting control plane node kindnet-171116 in cluster kindnet-171116
	I1004 02:09:58.502274  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:58.502859  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:58.502889  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:58.502815  172886 retry.go:31] will retry after 3.055206711s: waiting for machine to come up
	I1004 02:09:58.911934  173218 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:09:58.911983  173218 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 02:09:58.911998  173218 cache.go:57] Caching tarball of preloaded images
	I1004 02:09:58.912122  173218 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:09:58.912136  173218 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 02:09:58.912247  173218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/config.json ...
	I1004 02:09:58.912271  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/config.json: {Name:mk4083f71c6a5ca823470bb8c4ddee9c9126577c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:09:58.912429  173218 start.go:365] acquiring machines lock for kindnet-171116: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 02:10:01.559192  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:01.559680  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:10:01.559705  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:10:01.559620  172886 retry.go:31] will retry after 2.954536127s: waiting for machine to come up
	I1004 02:10:04.517813  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:04.518420  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:10:04.518458  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:10:04.518395  172886 retry.go:31] will retry after 3.962368262s: waiting for machine to come up
	I1004 02:10:08.482903  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:08.483372  172863 main.go:141] libmachine: (auto-171116) Found IP for machine: 192.168.72.39
	I1004 02:10:08.483407  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has current primary IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:08.483419  172863 main.go:141] libmachine: (auto-171116) Reserving static IP address...
	I1004 02:10:08.483829  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find host DHCP lease matching {name: "auto-171116", mac: "52:54:00:2d:7f:42", ip: "192.168.72.39"} in network mk-auto-171116
	I1004 02:10:08.565530  172863 main.go:141] libmachine: (auto-171116) DBG | Getting to WaitForSSH function...
	I1004 02:10:08.565563  172863 main.go:141] libmachine: (auto-171116) Reserved static IP address: 192.168.72.39
	I1004 02:10:08.565579  172863 main.go:141] libmachine: (auto-171116) Waiting for SSH to be available...
	I1004 02:10:08.568403  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:08.568856  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116
	I1004 02:10:08.568893  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find defined IP address of network mk-auto-171116 interface with MAC address 52:54:00:2d:7f:42
	I1004 02:10:08.569006  172863 main.go:141] libmachine: (auto-171116) DBG | Using SSH client type: external
	I1004 02:10:08.569034  172863 main.go:141] libmachine: (auto-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa (-rw-------)
	I1004 02:10:08.569070  172863 main.go:141] libmachine: (auto-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:10:08.569100  172863 main.go:141] libmachine: (auto-171116) DBG | About to run SSH command:
	I1004 02:10:08.569111  172863 main.go:141] libmachine: (auto-171116) DBG | exit 0
	I1004 02:10:08.572931  172863 main.go:141] libmachine: (auto-171116) DBG | SSH cmd err, output: exit status 255: 
	I1004 02:10:08.572961  172863 main.go:141] libmachine: (auto-171116) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 02:10:08.572971  172863 main.go:141] libmachine: (auto-171116) DBG | command : exit 0
	I1004 02:10:08.572984  172863 main.go:141] libmachine: (auto-171116) DBG | err     : exit status 255
	I1004 02:10:08.572994  172863 main.go:141] libmachine: (auto-171116) DBG | output  : 
	I1004 02:10:13.026977  173218 start.go:369] acquired machines lock for "kindnet-171116" in 14.114488424s
	I1004 02:10:13.027052  173218 start.go:93] Provisioning new machine with config: &{Name:kindnet-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kindnet-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:10:13.027207  173218 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:10:13.029600  173218 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 02:10:13.029808  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:13.029871  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:13.048534  173218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I1004 02:10:13.049034  173218 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:13.049667  173218 main.go:141] libmachine: Using API Version  1
	I1004 02:10:13.049696  173218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:13.050211  173218 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:13.050434  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetMachineName
	I1004 02:10:13.050613  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:13.050797  173218 start.go:159] libmachine.API.Create for "kindnet-171116" (driver="kvm2")
	I1004 02:10:13.050832  173218 client.go:168] LocalClient.Create starting
	I1004 02:10:13.050871  173218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 02:10:13.050912  173218 main.go:141] libmachine: Decoding PEM data...
	I1004 02:10:13.050930  173218 main.go:141] libmachine: Parsing certificate...
	I1004 02:10:13.050984  173218 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 02:10:13.051002  173218 main.go:141] libmachine: Decoding PEM data...
	I1004 02:10:13.051016  173218 main.go:141] libmachine: Parsing certificate...
	I1004 02:10:13.051031  173218 main.go:141] libmachine: Running pre-create checks...
	I1004 02:10:13.051040  173218 main.go:141] libmachine: (kindnet-171116) Calling .PreCreateCheck
	I1004 02:10:13.051452  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetConfigRaw
	I1004 02:10:13.051950  173218 main.go:141] libmachine: Creating machine...
	I1004 02:10:13.051971  173218 main.go:141] libmachine: (kindnet-171116) Calling .Create
	I1004 02:10:13.052130  173218 main.go:141] libmachine: (kindnet-171116) Creating KVM machine...
	I1004 02:10:13.053784  173218 main.go:141] libmachine: (kindnet-171116) DBG | found existing default KVM network
	I1004 02:10:13.055511  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.055318  173328 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:bb:21} reservation:<nil>}
	I1004 02:10:13.056476  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.056352  173328 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c5:97:42} reservation:<nil>}
	I1004 02:10:13.057530  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.057408  173328 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:b0:63} reservation:<nil>}
	I1004 02:10:13.058807  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.058707  173328 network.go:214] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:2f:38:99} reservation:<nil>}
	I1004 02:10:13.061589  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.061503  173328 network.go:209] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d1e0}
	I1004 02:10:13.068106  173218 main.go:141] libmachine: (kindnet-171116) DBG | trying to create private KVM network mk-kindnet-171116 192.168.83.0/24...
	I1004 02:10:13.152831  173218 main.go:141] libmachine: (kindnet-171116) DBG | private KVM network mk-kindnet-171116 192.168.83.0/24 created
	I1004 02:10:13.152863  173218 main.go:141] libmachine: (kindnet-171116) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116 ...
	I1004 02:10:13.152881  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.152810  173328 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:10:13.152901  173218 main.go:141] libmachine: (kindnet-171116) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 02:10:13.152988  173218 main.go:141] libmachine: (kindnet-171116) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 02:10:13.378857  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.378705  173328 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa...
	I1004 02:10:13.751305  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.751153  173328 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/kindnet-171116.rawdisk...
	I1004 02:10:13.751395  173218 main.go:141] libmachine: (kindnet-171116) DBG | Writing magic tar header
	I1004 02:10:13.751423  173218 main.go:141] libmachine: (kindnet-171116) DBG | Writing SSH key tar header
	I1004 02:10:13.751727  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:13.751597  173328 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116 ...
	I1004 02:10:13.751805  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116
	I1004 02:10:13.751833  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 02:10:13.751844  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116 (perms=drwx------)
	I1004 02:10:13.751944  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:10:13.751971  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 02:10:13.751986  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:10:13.752011  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 02:10:13.752026  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:10:13.752040  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 02:10:13.752060  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:10:13.752080  173218 main.go:141] libmachine: (kindnet-171116) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:10:13.752091  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:10:13.752107  173218 main.go:141] libmachine: (kindnet-171116) DBG | Checking permissions on dir: /home
	I1004 02:10:13.752118  173218 main.go:141] libmachine: (kindnet-171116) DBG | Skipping /home - not owner
	I1004 02:10:13.752129  173218 main.go:141] libmachine: (kindnet-171116) Creating domain...
	I1004 02:10:13.753407  173218 main.go:141] libmachine: (kindnet-171116) define libvirt domain using xml: 
	I1004 02:10:13.753430  173218 main.go:141] libmachine: (kindnet-171116) <domain type='kvm'>
	I1004 02:10:13.753439  173218 main.go:141] libmachine: (kindnet-171116)   <name>kindnet-171116</name>
	I1004 02:10:13.753448  173218 main.go:141] libmachine: (kindnet-171116)   <memory unit='MiB'>3072</memory>
	I1004 02:10:13.753462  173218 main.go:141] libmachine: (kindnet-171116)   <vcpu>2</vcpu>
	I1004 02:10:13.753478  173218 main.go:141] libmachine: (kindnet-171116)   <features>
	I1004 02:10:13.753489  173218 main.go:141] libmachine: (kindnet-171116)     <acpi/>
	I1004 02:10:13.753500  173218 main.go:141] libmachine: (kindnet-171116)     <apic/>
	I1004 02:10:13.753506  173218 main.go:141] libmachine: (kindnet-171116)     <pae/>
	I1004 02:10:13.753512  173218 main.go:141] libmachine: (kindnet-171116)     
	I1004 02:10:13.753518  173218 main.go:141] libmachine: (kindnet-171116)   </features>
	I1004 02:10:13.753528  173218 main.go:141] libmachine: (kindnet-171116)   <cpu mode='host-passthrough'>
	I1004 02:10:13.753537  173218 main.go:141] libmachine: (kindnet-171116)   
	I1004 02:10:13.753547  173218 main.go:141] libmachine: (kindnet-171116)   </cpu>
	I1004 02:10:13.753559  173218 main.go:141] libmachine: (kindnet-171116)   <os>
	I1004 02:10:13.753575  173218 main.go:141] libmachine: (kindnet-171116)     <type>hvm</type>
	I1004 02:10:13.753590  173218 main.go:141] libmachine: (kindnet-171116)     <boot dev='cdrom'/>
	I1004 02:10:13.753602  173218 main.go:141] libmachine: (kindnet-171116)     <boot dev='hd'/>
	I1004 02:10:13.753621  173218 main.go:141] libmachine: (kindnet-171116)     <bootmenu enable='no'/>
	I1004 02:10:13.753629  173218 main.go:141] libmachine: (kindnet-171116)   </os>
	I1004 02:10:13.753637  173218 main.go:141] libmachine: (kindnet-171116)   <devices>
	I1004 02:10:13.753645  173218 main.go:141] libmachine: (kindnet-171116)     <disk type='file' device='cdrom'>
	I1004 02:10:13.753656  173218 main.go:141] libmachine: (kindnet-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/boot2docker.iso'/>
	I1004 02:10:13.753673  173218 main.go:141] libmachine: (kindnet-171116)       <target dev='hdc' bus='scsi'/>
	I1004 02:10:13.753687  173218 main.go:141] libmachine: (kindnet-171116)       <readonly/>
	I1004 02:10:13.753699  173218 main.go:141] libmachine: (kindnet-171116)     </disk>
	I1004 02:10:13.753714  173218 main.go:141] libmachine: (kindnet-171116)     <disk type='file' device='disk'>
	I1004 02:10:13.753729  173218 main.go:141] libmachine: (kindnet-171116)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:10:13.753744  173218 main.go:141] libmachine: (kindnet-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/kindnet-171116.rawdisk'/>
	I1004 02:10:13.753756  173218 main.go:141] libmachine: (kindnet-171116)       <target dev='hda' bus='virtio'/>
	I1004 02:10:13.753810  173218 main.go:141] libmachine: (kindnet-171116)     </disk>
	I1004 02:10:13.753854  173218 main.go:141] libmachine: (kindnet-171116)     <interface type='network'>
	I1004 02:10:13.753870  173218 main.go:141] libmachine: (kindnet-171116)       <source network='mk-kindnet-171116'/>
	I1004 02:10:13.753884  173218 main.go:141] libmachine: (kindnet-171116)       <model type='virtio'/>
	I1004 02:10:13.753898  173218 main.go:141] libmachine: (kindnet-171116)     </interface>
	I1004 02:10:13.753915  173218 main.go:141] libmachine: (kindnet-171116)     <interface type='network'>
	I1004 02:10:13.753928  173218 main.go:141] libmachine: (kindnet-171116)       <source network='default'/>
	I1004 02:10:13.753939  173218 main.go:141] libmachine: (kindnet-171116)       <model type='virtio'/>
	I1004 02:10:13.753953  173218 main.go:141] libmachine: (kindnet-171116)     </interface>
	I1004 02:10:13.753963  173218 main.go:141] libmachine: (kindnet-171116)     <serial type='pty'>
	I1004 02:10:13.753977  173218 main.go:141] libmachine: (kindnet-171116)       <target port='0'/>
	I1004 02:10:13.753990  173218 main.go:141] libmachine: (kindnet-171116)     </serial>
	I1004 02:10:13.754004  173218 main.go:141] libmachine: (kindnet-171116)     <console type='pty'>
	I1004 02:10:13.754018  173218 main.go:141] libmachine: (kindnet-171116)       <target type='serial' port='0'/>
	I1004 02:10:13.754031  173218 main.go:141] libmachine: (kindnet-171116)     </console>
	I1004 02:10:13.754044  173218 main.go:141] libmachine: (kindnet-171116)     <rng model='virtio'>
	I1004 02:10:13.754059  173218 main.go:141] libmachine: (kindnet-171116)       <backend model='random'>/dev/random</backend>
	I1004 02:10:13.754075  173218 main.go:141] libmachine: (kindnet-171116)     </rng>
	I1004 02:10:13.754088  173218 main.go:141] libmachine: (kindnet-171116)     
	I1004 02:10:13.754096  173218 main.go:141] libmachine: (kindnet-171116)     
	I1004 02:10:13.754109  173218 main.go:141] libmachine: (kindnet-171116)   </devices>
	I1004 02:10:13.754122  173218 main.go:141] libmachine: (kindnet-171116) </domain>
	I1004 02:10:13.754137  173218 main.go:141] libmachine: (kindnet-171116) 
	I1004 02:10:13.758850  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:76:cb:2f in network default
	I1004 02:10:13.759494  173218 main.go:141] libmachine: (kindnet-171116) Ensuring networks are active...
	I1004 02:10:13.759532  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:13.760207  173218 main.go:141] libmachine: (kindnet-171116) Ensuring network default is active
	I1004 02:10:13.760532  173218 main.go:141] libmachine: (kindnet-171116) Ensuring network mk-kindnet-171116 is active
	I1004 02:10:13.761250  173218 main.go:141] libmachine: (kindnet-171116) Getting domain xml...
	I1004 02:10:13.762087  173218 main.go:141] libmachine: (kindnet-171116) Creating domain...
	I1004 02:10:11.573421  172863 main.go:141] libmachine: (auto-171116) DBG | Getting to WaitForSSH function...
	I1004 02:10:11.576184  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.576623  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:11.576655  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.576865  172863 main.go:141] libmachine: (auto-171116) DBG | Using SSH client type: external
	I1004 02:10:11.576903  172863 main.go:141] libmachine: (auto-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa (-rw-------)
	I1004 02:10:11.576957  172863 main.go:141] libmachine: (auto-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.39 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:10:11.576982  172863 main.go:141] libmachine: (auto-171116) DBG | About to run SSH command:
	I1004 02:10:11.576993  172863 main.go:141] libmachine: (auto-171116) DBG | exit 0
	I1004 02:10:11.666078  172863 main.go:141] libmachine: (auto-171116) DBG | SSH cmd err, output: <nil>: 
	I1004 02:10:11.666317  172863 main.go:141] libmachine: (auto-171116) KVM machine creation complete!
	I1004 02:10:11.666724  172863 main.go:141] libmachine: (auto-171116) Calling .GetConfigRaw
	I1004 02:10:11.667344  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:11.667559  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:11.667750  172863 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:10:11.667768  172863 main.go:141] libmachine: (auto-171116) Calling .GetState
	I1004 02:10:11.669404  172863 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:10:11.669420  172863 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:10:11.669429  172863 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:10:11.669438  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:11.672205  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.672593  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:11.672622  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.672786  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:11.672955  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.673133  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.673310  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:11.673503  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:11.673829  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:11.673869  172863 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:10:11.785241  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:10:11.785262  172863 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:10:11.785271  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:11.788735  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.789182  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:11.789214  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.789359  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:11.789599  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.789809  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.790000  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:11.790196  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:11.790511  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:11.790524  172863 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:10:11.907131  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 02:10:11.907201  172863 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:10:11.907212  172863 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:10:11.907223  172863 main.go:141] libmachine: (auto-171116) Calling .GetMachineName
	I1004 02:10:11.907502  172863 buildroot.go:166] provisioning hostname "auto-171116"
	I1004 02:10:11.907545  172863 main.go:141] libmachine: (auto-171116) Calling .GetMachineName
	I1004 02:10:11.907781  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:11.910803  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.911212  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:11.911255  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:11.911413  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:11.911613  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.911753  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:11.911930  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:11.912122  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:11.912622  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:11.912644  172863 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-171116 && echo "auto-171116" | sudo tee /etc/hostname
	I1004 02:10:12.039759  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-171116
	
	I1004 02:10:12.039788  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.043144  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.043616  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.043650  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.043800  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:12.044029  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.044243  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.044424  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:12.044604  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:12.044983  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:12.045005  172863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-171116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-171116/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-171116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:10:12.171197  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:10:12.171231  172863 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 02:10:12.171269  172863 buildroot.go:174] setting up certificates
	I1004 02:10:12.171279  172863 provision.go:83] configureAuth start
	I1004 02:10:12.171307  172863 main.go:141] libmachine: (auto-171116) Calling .GetMachineName
	I1004 02:10:12.171652  172863 main.go:141] libmachine: (auto-171116) Calling .GetIP
	I1004 02:10:12.174619  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.175155  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.175191  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.175451  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.177902  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.178251  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.178283  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.178435  172863 provision.go:138] copyHostCerts
	I1004 02:10:12.178486  172863 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 02:10:12.178497  172863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 02:10:12.178560  172863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 02:10:12.178656  172863 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 02:10:12.178664  172863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 02:10:12.178689  172863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 02:10:12.178739  172863 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 02:10:12.178746  172863 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 02:10:12.178771  172863 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 02:10:12.178841  172863 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.auto-171116 san=[192.168.72.39 192.168.72.39 localhost 127.0.0.1 minikube auto-171116]
	I1004 02:10:12.254320  172863 provision.go:172] copyRemoteCerts
	I1004 02:10:12.254388  172863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:10:12.254421  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.257460  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.257899  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.257921  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.258240  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:12.258440  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.258640  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:12.258841  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:12.349938  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:10:12.377150  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1004 02:10:12.402785  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 02:10:12.426616  172863 provision.go:86] duration metric: configureAuth took 255.30504ms
	I1004 02:10:12.426644  172863 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:10:12.426867  172863 config.go:182] Loaded profile config "auto-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:10:12.426960  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.429833  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.430265  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.430296  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.430513  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:12.430706  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.430878  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.431055  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:12.431264  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:12.431620  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:12.431647  172863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:10:12.762778  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:10:12.762813  172863 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:10:12.762826  172863 main.go:141] libmachine: (auto-171116) Calling .GetURL
	I1004 02:10:12.764358  172863 main.go:141] libmachine: (auto-171116) DBG | Using libvirt version 6000000
	I1004 02:10:12.767292  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.767882  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.767917  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.768155  172863 main.go:141] libmachine: Docker is up and running!
	I1004 02:10:12.768169  172863 main.go:141] libmachine: Reticulating splines...
	I1004 02:10:12.768185  172863 client.go:171] LocalClient.Create took 26.941071115s
	I1004 02:10:12.768211  172863 start.go:167] duration metric: libmachine.API.Create for "auto-171116" took 26.941141994s
	I1004 02:10:12.768223  172863 start.go:300] post-start starting for "auto-171116" (driver="kvm2")
	I1004 02:10:12.768237  172863 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:10:12.768271  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:12.768607  172863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:10:12.768639  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.770991  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.771402  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.771426  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.771660  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:12.771889  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.772054  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:12.772279  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:12.860520  172863 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:10:12.864923  172863 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 02:10:12.864957  172863 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 02:10:12.865037  172863 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 02:10:12.865114  172863 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 02:10:12.865205  172863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 02:10:12.874524  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:10:12.902269  172863 start.go:303] post-start completed in 134.026388ms
	I1004 02:10:12.902324  172863 main.go:141] libmachine: (auto-171116) Calling .GetConfigRaw
	I1004 02:10:12.903056  172863 main.go:141] libmachine: (auto-171116) Calling .GetIP
	I1004 02:10:12.906241  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.906654  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.906688  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.906969  172863 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/config.json ...
	I1004 02:10:12.907192  172863 start.go:128] duration metric: createHost completed in 27.098918158s
	I1004 02:10:12.907220  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:12.909726  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.910148  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:12.910186  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:12.910377  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:12.910600  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.910777  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:12.910917  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:12.911107  172863 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:12.911539  172863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.39 22 <nil> <nil>}
	I1004 02:10:12.911555  172863 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 02:10:13.026778  172863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696385413.011575241
	
	I1004 02:10:13.026805  172863 fix.go:206] guest clock: 1696385413.011575241
	I1004 02:10:13.026815  172863 fix.go:219] Guest: 2023-10-04 02:10:13.011575241 +0000 UTC Remote: 2023-10-04 02:10:12.907204733 +0000 UTC m=+27.216992343 (delta=104.370508ms)
	I1004 02:10:13.026841  172863 fix.go:190] guest clock delta is within tolerance: 104.370508ms
	I1004 02:10:13.026847  172863 start.go:83] releasing machines lock for "auto-171116", held for 27.218663878s
	I1004 02:10:13.026876  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:13.027234  172863 main.go:141] libmachine: (auto-171116) Calling .GetIP
	I1004 02:10:13.030821  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.031309  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:13.031345  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.031541  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:13.032160  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:13.032408  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:13.032469  172863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:10:13.032526  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:13.032604  172863 ssh_runner.go:195] Run: cat /version.json
	I1004 02:10:13.032628  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:13.035965  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.036116  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.036307  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:13.036336  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.036564  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:13.036577  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:13.036606  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:13.036797  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:13.036801  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:13.036971  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:13.037030  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:13.037103  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:13.037188  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:13.037347  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:13.146608  172863 ssh_runner.go:195] Run: systemctl --version
	I1004 02:10:13.153578  172863 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:10:13.320511  172863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:10:13.326986  172863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:10:13.327064  172863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:10:13.342713  172863 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:10:13.342739  172863 start.go:469] detecting cgroup driver to use...
	I1004 02:10:13.342815  172863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:10:13.359306  172863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:10:13.374865  172863 docker.go:197] disabling cri-docker service (if available) ...
	I1004 02:10:13.374924  172863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:10:13.390300  172863 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:10:13.406038  172863 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:10:13.530374  172863 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:10:13.664692  172863 docker.go:213] disabling docker service ...
	I1004 02:10:13.664788  172863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:10:13.679318  172863 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:10:13.691749  172863 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:10:13.803806  172863 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:10:13.913569  172863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:10:13.927673  172863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:10:13.950287  172863 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 02:10:13.950342  172863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:13.962216  172863 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:10:13.962292  172863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:13.974651  172863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:13.986771  172863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:13.997257  172863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:10:14.008812  172863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:10:14.017925  172863 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:10:14.017986  172863 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:10:14.031210  172863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:10:14.041299  172863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:10:14.145411  172863 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:10:14.330375  172863 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:10:14.330463  172863 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:10:14.335858  172863 start.go:537] Will wait 60s for crictl version
	I1004 02:10:14.335920  172863 ssh_runner.go:195] Run: which crictl
	I1004 02:10:14.339825  172863 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:10:14.386676  172863 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 02:10:14.386749  172863 ssh_runner.go:195] Run: crio --version
	I1004 02:10:14.453453  172863 ssh_runner.go:195] Run: crio --version
	I1004 02:10:14.505654  172863 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 02:10:14.507276  172863 main.go:141] libmachine: (auto-171116) Calling .GetIP
	I1004 02:10:14.510417  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:14.510939  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:14.510961  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:14.511247  172863 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 02:10:14.515798  172863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:10:14.532596  172863 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:10:14.532678  172863 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:10:14.575497  172863 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 02:10:14.575586  172863 ssh_runner.go:195] Run: which lz4
	I1004 02:10:14.580064  172863 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 02:10:14.585112  172863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:10:14.585147  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 02:10:15.297995  173218 main.go:141] libmachine: (kindnet-171116) Waiting to get IP...
	I1004 02:10:15.299302  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:15.300222  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:15.300327  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:15.300232  173328 retry.go:31] will retry after 262.458251ms: waiting for machine to come up
	I1004 02:10:15.565097  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:15.566183  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:15.566215  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:15.566088  173328 retry.go:31] will retry after 279.916647ms: waiting for machine to come up
	I1004 02:10:15.847769  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:15.848329  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:15.848354  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:15.848307  173328 retry.go:31] will retry after 358.789793ms: waiting for machine to come up
	I1004 02:10:16.209025  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:16.209620  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:16.209654  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:16.209504  173328 retry.go:31] will retry after 541.551116ms: waiting for machine to come up
	I1004 02:10:16.752553  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:16.753072  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:16.753118  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:16.753005  173328 retry.go:31] will retry after 757.228028ms: waiting for machine to come up
	I1004 02:10:17.511999  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:17.512604  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:17.512630  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:17.512520  173328 retry.go:31] will retry after 709.542677ms: waiting for machine to come up
	I1004 02:10:18.223949  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:18.224547  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:18.224576  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:18.224495  173328 retry.go:31] will retry after 1.152896257s: waiting for machine to come up
	I1004 02:10:16.559382  172863 crio.go:444] Took 1.979353 seconds to copy over tarball
	I1004 02:10:16.559467  172863 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:10:19.904879  172863 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.345382492s)
	I1004 02:10:19.904916  172863 crio.go:451] Took 3.345504 seconds to extract the tarball
	I1004 02:10:19.904928  172863 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:10:19.948221  172863 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:10:20.012571  172863 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 02:10:20.012600  172863 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:10:20.012682  172863 ssh_runner.go:195] Run: crio config
	I1004 02:10:20.096327  172863 cni.go:84] Creating CNI manager for ""
	I1004 02:10:20.096347  172863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:10:20.096365  172863 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 02:10:20.096382  172863 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.39 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-171116 NodeName:auto-171116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.39"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.39 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:10:20.096513  172863 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.39
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-171116"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.39
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.39"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:10:20.096579  172863 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-171116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.39
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:auto-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1004 02:10:20.096627  172863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 02:10:20.110372  172863 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:10:20.110458  172863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:10:20.120777  172863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (370 bytes)
	I1004 02:10:20.144000  172863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:10:20.163521  172863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2095 bytes)
	I1004 02:10:20.181538  172863 ssh_runner.go:195] Run: grep 192.168.72.39	control-plane.minikube.internal$ /etc/hosts
	I1004 02:10:20.186670  172863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.39	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:10:20.201771  172863 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116 for IP: 192.168.72.39
	I1004 02:10:20.201813  172863 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:20.202003  172863 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 02:10:20.202057  172863 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 02:10:20.202124  172863 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.key
	I1004 02:10:20.202146  172863 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.crt with IP's: []
	I1004 02:10:20.339437  172863 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.crt ...
	I1004 02:10:20.339471  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.crt: {Name:mk9774181bef071f5bd27b01dbb7f1035f86b064 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:20.339669  172863 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.key ...
	I1004 02:10:20.339684  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/client.key: {Name:mkc2b565c4ff8fbd7e3e20a7454131fafb49929f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:20.339766  172863 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key.8cee03cf
	I1004 02:10:20.339779  172863 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt.8cee03cf with IP's: [192.168.72.39 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 02:10:20.559527  172863 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt.8cee03cf ...
	I1004 02:10:20.559562  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt.8cee03cf: {Name:mk49dd110b1c97b83d5cda417591b7405650cacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:20.559752  172863 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key.8cee03cf ...
	I1004 02:10:20.559768  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key.8cee03cf: {Name:mk2352d1f38a92a8af9afe35361dc30abd8f539e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:20.559865  172863 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt.8cee03cf -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt
	I1004 02:10:20.559952  172863 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key.8cee03cf -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key
	I1004 02:10:20.560025  172863 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.key
	I1004 02:10:20.560045  172863 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.crt with IP's: []
	I1004 02:10:20.759015  172863 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.crt ...
	I1004 02:10:21.011328  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.crt: {Name:mkb9419afeb757f3d744a66fab24324b29a1d1ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:21.011571  172863 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.key ...
	I1004 02:10:21.011591  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.key: {Name:mkd233482ab2639561f0e3df19714409ebc8fd64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:21.011837  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 02:10:21.011897  172863 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 02:10:21.011915  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 02:10:21.011948  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:10:21.011983  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:10:21.012017  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 02:10:21.012072  172863 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:10:21.012872  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 02:10:21.044864  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:10:21.072418  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:10:21.100829  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 02:10:21.132026  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:10:21.158220  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:10:21.183903  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:10:21.210789  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:10:21.238958  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 02:10:21.265030  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:10:21.292962  172863 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 02:10:21.320805  172863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:10:21.338928  172863 ssh_runner.go:195] Run: openssl version
	I1004 02:10:21.344803  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 02:10:21.355848  172863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 02:10:21.361378  172863 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 02:10:21.361447  172863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 02:10:21.367628  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 02:10:21.378774  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 02:10:21.389792  172863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 02:10:21.395384  172863 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 02:10:21.395456  172863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 02:10:21.402290  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 02:10:21.413269  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:10:21.423915  172863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:21.428993  172863 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:21.429083  172863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:21.436237  172863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:10:21.447717  172863 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 02:10:21.452401  172863 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 02:10:21.452465  172863 kubeadm.go:404] StartCluster: {Name:auto-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.39 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:10:21.452560  172863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:10:21.452614  172863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:10:21.495784  172863 cri.go:89] found id: ""
	I1004 02:10:21.495866  172863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:10:21.506669  172863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:10:21.516201  172863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:10:21.525721  172863 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:10:21.525777  172863 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:10:21.583129  172863 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:10:21.583229  172863 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:10:21.739888  172863 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:10:21.740033  172863 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:10:21.740165  172863 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:10:22.053367  172863 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:10:19.378837  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:19.379457  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:19.379488  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:19.379398  173328 retry.go:31] will retry after 1.482244973s: waiting for machine to come up
	I1004 02:10:20.863280  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:20.863763  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:20.863789  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:20.863712  173328 retry.go:31] will retry after 1.378593167s: waiting for machine to come up
	I1004 02:10:22.243897  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:22.244337  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:22.244370  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:22.244289  173328 retry.go:31] will retry after 2.286300306s: waiting for machine to come up
	I1004 02:10:22.055654  172863 out.go:204]   - Generating certificates and keys ...
	I1004 02:10:22.055782  172863 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:10:22.055871  172863 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:10:22.260559  172863 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:10:22.419575  172863 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:10:22.592007  172863 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:10:22.845015  172863 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 02:10:23.147656  172863 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 02:10:23.147890  172863 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-171116 localhost] and IPs [192.168.72.39 127.0.0.1 ::1]
	I1004 02:10:23.231029  172863 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 02:10:23.231252  172863 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-171116 localhost] and IPs [192.168.72.39 127.0.0.1 ::1]
	I1004 02:10:23.528095  172863 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:10:23.638790  172863 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:10:23.765007  172863 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 02:10:23.765124  172863 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:10:23.933553  172863 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:10:24.023831  172863 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:10:24.197575  172863 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:10:24.378147  172863 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:10:24.379061  172863 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:10:24.381713  172863 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:10:24.383771  172863 out.go:204]   - Booting up control plane ...
	I1004 02:10:24.383913  172863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:10:24.384017  172863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:10:24.384806  172863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:10:24.409866  172863 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:10:24.414575  172863 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:10:24.414734  172863 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:10:24.565385  172863 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:10:24.532705  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:24.533370  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:24.533408  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:24.533298  173328 retry.go:31] will retry after 2.537698629s: waiting for machine to come up
	I1004 02:10:27.072470  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:27.073143  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:27.073173  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:27.073085  173328 retry.go:31] will retry after 3.568881298s: waiting for machine to come up
	I1004 02:10:32.066662  172863 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503116 seconds
	I1004 02:10:32.066788  172863 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:10:32.084361  172863 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:10:32.618684  172863 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:10:32.618951  172863 kubeadm.go:322] [mark-control-plane] Marking the node auto-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:10:33.135686  172863 kubeadm.go:322] [bootstrap-token] Using token: bj5rn0.zji2eygvm79dq85t
	I1004 02:10:33.137403  172863 out.go:204]   - Configuring RBAC rules ...
	I1004 02:10:33.137563  172863 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:10:33.146367  172863 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:10:33.155165  172863 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:10:33.161688  172863 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:10:33.166302  172863 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:10:33.178429  172863 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:10:33.205396  172863 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:10:33.494198  172863 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:10:33.553963  172863 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:10:33.554976  172863 kubeadm.go:322] 
	I1004 02:10:33.555063  172863 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:10:33.555085  172863 kubeadm.go:322] 
	I1004 02:10:33.555198  172863 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:10:33.555210  172863 kubeadm.go:322] 
	I1004 02:10:33.555242  172863 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:10:33.555314  172863 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:10:33.555386  172863 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:10:33.555419  172863 kubeadm.go:322] 
	I1004 02:10:33.555513  172863 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:10:33.555525  172863 kubeadm.go:322] 
	I1004 02:10:33.555575  172863 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:10:33.555598  172863 kubeadm.go:322] 
	I1004 02:10:33.555673  172863 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:10:33.555769  172863 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:10:33.555860  172863 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:10:33.555870  172863 kubeadm.go:322] 
	I1004 02:10:33.555985  172863 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:10:33.556098  172863 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:10:33.556106  172863 kubeadm.go:322] 
	I1004 02:10:33.556202  172863 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token bj5rn0.zji2eygvm79dq85t \
	I1004 02:10:33.556324  172863 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:10:33.556356  172863 kubeadm.go:322] 	--control-plane 
	I1004 02:10:33.556366  172863 kubeadm.go:322] 
	I1004 02:10:33.556461  172863 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:10:33.556474  172863 kubeadm.go:322] 
	I1004 02:10:33.556591  172863 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token bj5rn0.zji2eygvm79dq85t \
	I1004 02:10:33.556685  172863 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:10:33.557362  172863 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:10:33.557394  172863 cni.go:84] Creating CNI manager for ""
	I1004 02:10:33.557404  172863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:10:33.559298  172863 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:10:30.644112  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:30.644579  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:30.644632  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:30.644521  173328 retry.go:31] will retry after 2.999525312s: waiting for machine to come up
	I1004 02:10:33.646209  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:33.646741  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find current IP address of domain kindnet-171116 in network mk-kindnet-171116
	I1004 02:10:33.646774  173218 main.go:141] libmachine: (kindnet-171116) DBG | I1004 02:10:33.646687  173328 retry.go:31] will retry after 3.801596654s: waiting for machine to come up
	I1004 02:10:33.560841  172863 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:10:33.585727  172863 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
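	(Aside: the 457-byte bridge CNI payload copied to /etc/cni/net.d/1-k8s.conflist above is never echoed into the log. The sketch below is a representative bridge conflist of the kind this step installs, not the literal file; the subnet, plugin list and field values are assumptions.)
	# Hypothetical reconstruction of the bridge CNI config this step drops in place;
	# <<- strips the leading tabs, so the snippet is copy-pasteable as shown.
	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF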
	I1004 02:10:33.603215  172863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:10:33.603304  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=auto-171116 minikube.k8s.io/updated_at=2023_10_04T02_10_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:33.603329  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:33.859457  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:33.859510  172863 ops.go:34] apiserver oom_adj: -16
	I1004 02:10:34.058082  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:34.693316  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:35.193668  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:35.693269  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:37.451079  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:37.451540  173218 main.go:141] libmachine: (kindnet-171116) Found IP for machine: 192.168.83.126
	I1004 02:10:37.451568  173218 main.go:141] libmachine: (kindnet-171116) Reserving static IP address...
	I1004 02:10:37.451594  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has current primary IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:37.451955  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find host DHCP lease matching {name: "kindnet-171116", mac: "52:54:00:28:9b:7b", ip: "192.168.83.126"} in network mk-kindnet-171116
	I1004 02:10:37.531502  173218 main.go:141] libmachine: (kindnet-171116) DBG | Getting to WaitForSSH function...
	I1004 02:10:37.531540  173218 main.go:141] libmachine: (kindnet-171116) Reserved static IP address: 192.168.83.126
	I1004 02:10:37.531555  173218 main.go:141] libmachine: (kindnet-171116) Waiting for SSH to be available...
	I1004 02:10:37.534304  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:37.534683  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116
	I1004 02:10:37.534721  173218 main.go:141] libmachine: (kindnet-171116) DBG | unable to find defined IP address of network mk-kindnet-171116 interface with MAC address 52:54:00:28:9b:7b
	I1004 02:10:37.534861  173218 main.go:141] libmachine: (kindnet-171116) DBG | Using SSH client type: external
	I1004 02:10:37.534891  173218 main.go:141] libmachine: (kindnet-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa (-rw-------)
	I1004 02:10:37.534947  173218 main.go:141] libmachine: (kindnet-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:10:37.534975  173218 main.go:141] libmachine: (kindnet-171116) DBG | About to run SSH command:
	I1004 02:10:37.535014  173218 main.go:141] libmachine: (kindnet-171116) DBG | exit 0
	I1004 02:10:37.538609  173218 main.go:141] libmachine: (kindnet-171116) DBG | SSH cmd err, output: exit status 255: 
	I1004 02:10:37.538637  173218 main.go:141] libmachine: (kindnet-171116) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1004 02:10:37.538648  173218 main.go:141] libmachine: (kindnet-171116) DBG | command : exit 0
	I1004 02:10:37.538658  173218 main.go:141] libmachine: (kindnet-171116) DBG | err     : exit status 255
	I1004 02:10:37.538674  173218 main.go:141] libmachine: (kindnet-171116) DBG | output  : 
	I1004 02:10:36.192761  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:36.693292  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:37.192910  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:37.692727  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:38.192893  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:38.693206  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:39.193043  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:39.693370  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:40.192760  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:40.692723  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:40.540993  173218 main.go:141] libmachine: (kindnet-171116) DBG | Getting to WaitForSSH function...
	I1004 02:10:40.544010  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.544475  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:40.544504  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.544665  173218 main.go:141] libmachine: (kindnet-171116) DBG | Using SSH client type: external
	I1004 02:10:40.544701  173218 main.go:141] libmachine: (kindnet-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa (-rw-------)
	I1004 02:10:40.544771  173218 main.go:141] libmachine: (kindnet-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.126 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:10:40.544804  173218 main.go:141] libmachine: (kindnet-171116) DBG | About to run SSH command:
	I1004 02:10:40.544822  173218 main.go:141] libmachine: (kindnet-171116) DBG | exit 0
	I1004 02:10:40.633825  173218 main.go:141] libmachine: (kindnet-171116) DBG | SSH cmd err, output: <nil>: 
	I1004 02:10:40.634144  173218 main.go:141] libmachine: (kindnet-171116) KVM machine creation complete!
	I1004 02:10:40.634511  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetConfigRaw
	I1004 02:10:40.635134  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:40.635422  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:40.635627  173218 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:10:40.635644  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetState
	I1004 02:10:40.637204  173218 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:10:40.637225  173218 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:10:40.637235  173218 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:10:40.637245  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:40.639786  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.640246  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:40.640283  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.640412  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:40.640607  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.640799  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.640965  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:40.641190  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:40.641557  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:40.641581  173218 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:10:40.758228  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:10:40.758260  173218 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:10:40.758272  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:40.761522  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.762052  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:40.762096  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.762283  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:40.762510  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.762687  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.762867  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:40.763108  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:40.763539  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:40.763567  173218 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:10:40.887082  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 02:10:40.887163  173218 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:10:40.887185  173218 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:10:40.887204  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetMachineName
	I1004 02:10:40.887547  173218 buildroot.go:166] provisioning hostname "kindnet-171116"
	I1004 02:10:40.887572  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetMachineName
	I1004 02:10:40.887764  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:40.890785  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.891203  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:40.891237  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:40.891419  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:40.891606  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.891774  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:40.891917  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:40.892092  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:40.892410  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:40.892424  173218 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-171116 && echo "kindnet-171116" | sudo tee /etc/hostname
	I1004 02:10:41.023039  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-171116
	
	I1004 02:10:41.023067  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.026416  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.026806  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.026853  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.027049  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:41.027296  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.027494  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.027660  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:41.027861  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:41.028248  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:41.028269  173218 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-171116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-171116/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-171116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:10:41.160227  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:10:41.160260  173218 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 02:10:41.160309  173218 buildroot.go:174] setting up certificates
	I1004 02:10:41.160319  173218 provision.go:83] configureAuth start
	I1004 02:10:41.160334  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetMachineName
	I1004 02:10:41.160596  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetIP
	I1004 02:10:41.163515  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.163901  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.163943  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.164068  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.166587  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.167014  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.167044  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.167242  173218 provision.go:138] copyHostCerts
	I1004 02:10:41.167309  173218 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 02:10:41.167330  173218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 02:10:41.167399  173218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 02:10:41.167548  173218 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 02:10:41.167563  173218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 02:10:41.167602  173218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 02:10:41.167688  173218 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 02:10:41.167700  173218 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 02:10:41.167724  173218 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 02:10:41.167772  173218 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.kindnet-171116 san=[192.168.83.126 192.168.83.126 localhost 127.0.0.1 minikube kindnet-171116]
	I1004 02:10:41.254697  173218 provision.go:172] copyRemoteCerts
	I1004 02:10:41.254774  173218 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:10:41.254823  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.258319  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.258811  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.258855  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.259002  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:41.259212  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.259433  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:41.259592  173218 sshutil.go:53] new ssh client: &{IP:192.168.83.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa Username:docker}
	I1004 02:10:41.353394  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 02:10:41.379487  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:10:41.403826  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 02:10:41.427418  173218 provision.go:86] duration metric: configureAuth took 267.081331ms
	I1004 02:10:41.427451  173218 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:10:41.427672  173218 config.go:182] Loaded profile config "kindnet-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:10:41.427766  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.430635  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.431096  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.431131  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.431397  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:41.431619  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.431807  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.432063  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:41.432272  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:41.432594  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:41.432614  173218 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:10:41.773042  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:10:41.773092  173218 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:10:41.773104  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetURL
	I1004 02:10:41.774816  173218 main.go:141] libmachine: (kindnet-171116) DBG | Using libvirt version 6000000
	I1004 02:10:41.777828  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.778257  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.778302  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.778459  173218 main.go:141] libmachine: Docker is up and running!
	I1004 02:10:41.778478  173218 main.go:141] libmachine: Reticulating splines...
	I1004 02:10:41.778488  173218 client.go:171] LocalClient.Create took 28.727642867s
	I1004 02:10:41.778515  173218 start.go:167] duration metric: libmachine.API.Create for "kindnet-171116" took 28.727720222s
	I1004 02:10:41.778532  173218 start.go:300] post-start starting for "kindnet-171116" (driver="kvm2")
	I1004 02:10:41.778544  173218 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:10:41.778568  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:41.778890  173218 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:10:41.778925  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.781008  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.781369  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.781398  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.781559  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:41.781769  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.781991  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:41.782164  173218 sshutil.go:53] new ssh client: &{IP:192.168.83.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa Username:docker}
	I1004 02:10:41.872631  173218 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:10:41.876904  173218 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 02:10:41.876931  173218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 02:10:41.876997  173218 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 02:10:41.877066  173218 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 02:10:41.877173  173218 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 02:10:41.886648  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:10:41.910515  173218 start.go:303] post-start completed in 131.942764ms
	I1004 02:10:41.910577  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetConfigRaw
	I1004 02:10:41.911272  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetIP
	I1004 02:10:41.914266  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.914564  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.914591  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.914860  173218 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/config.json ...
	I1004 02:10:41.915068  173218 start.go:128] duration metric: createHost completed in 28.887844224s
	I1004 02:10:41.915091  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:41.917585  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.917903  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:41.917936  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:41.918094  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:41.918287  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.918475  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:41.918637  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:41.918831  173218 main.go:141] libmachine: Using SSH client type: native
	I1004 02:10:41.919152  173218 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.83.126 22 <nil> <nil>}
	I1004 02:10:41.919167  173218 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 02:10:42.035434  173218 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696385442.018385101
	
	I1004 02:10:42.035461  173218 fix.go:206] guest clock: 1696385442.018385101
	I1004 02:10:42.035470  173218 fix.go:219] Guest: 2023-10-04 02:10:42.018385101 +0000 UTC Remote: 2023-10-04 02:10:41.915078874 +0000 UTC m=+43.129098396 (delta=103.306227ms)
	I1004 02:10:42.035487  173218 fix.go:190] guest clock delta is within tolerance: 103.306227ms
	I1004 02:10:42.035492  173218 start.go:83] releasing machines lock for "kindnet-171116", held for 29.008484601s
	I1004 02:10:42.035516  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:42.035815  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetIP
	I1004 02:10:42.038747  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.039109  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:42.039145  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.039316  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:42.039996  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:42.040162  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:10:42.040227  173218 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:10:42.040277  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:42.040383  173218 ssh_runner.go:195] Run: cat /version.json
	I1004 02:10:42.040415  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHHostname
	I1004 02:10:42.043089  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.043354  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.043484  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:42.043543  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.043676  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:42.043796  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:42.043829  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:42.044041  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHPort
	I1004 02:10:42.044041  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:42.044236  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:42.044288  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHKeyPath
	I1004 02:10:42.044426  173218 sshutil.go:53] new ssh client: &{IP:192.168.83.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa Username:docker}
	I1004 02:10:42.044452  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetSSHUsername
	I1004 02:10:42.044587  173218 sshutil.go:53] new ssh client: &{IP:192.168.83.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/kindnet-171116/id_rsa Username:docker}
	I1004 02:10:42.150980  173218 ssh_runner.go:195] Run: systemctl --version
	I1004 02:10:42.157112  173218 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:10:42.320808  173218 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:10:42.327992  173218 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:10:42.328077  173218 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:10:42.344640  173218 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:10:42.344669  173218 start.go:469] detecting cgroup driver to use...
	I1004 02:10:42.344761  173218 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:10:42.362305  173218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:10:42.374969  173218 docker.go:197] disabling cri-docker service (if available) ...
	I1004 02:10:42.375027  173218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:10:42.388383  173218 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:10:42.402041  173218 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:10:42.503512  173218 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:10:42.633264  173218 docker.go:213] disabling docker service ...
	I1004 02:10:42.633335  173218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:10:42.648742  173218 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:10:42.661372  173218 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:10:42.792122  173218 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:10:42.910714  173218 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:10:42.925502  173218 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:10:42.944026  173218 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 02:10:42.944103  173218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:42.954429  173218 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:10:42.954502  173218 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:42.964510  173218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:10:42.974407  173218 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
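	(Aside: the sed edits above, together with the crictl.yaml write a few lines earlier, steer CRI-O toward roughly the following state. This is a reconstruction from those commands, not a dump of the real files; in particular the section placement inside 02-crio.conf is an assumption.)
	# Hypothetical end state of the CRI-O drop-in after the pause-image and cgroup edits:
	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	EOF
	# crictl is pointed at the CRI-O socket (mirrors the /etc/crictl.yaml write above):
	echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml >/dev/null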
	I1004 02:10:42.984369  173218 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:10:42.994439  173218 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:10:43.003553  173218 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:10:43.003622  173218 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:10:43.018362  173218 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:10:43.027742  173218 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:10:43.146942  173218 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:10:43.332242  173218 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:10:43.332328  173218 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:10:43.342931  173218 start.go:537] Will wait 60s for crictl version
	I1004 02:10:43.342998  173218 ssh_runner.go:195] Run: which crictl
	I1004 02:10:43.347290  173218 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:10:43.394717  173218 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 02:10:43.394830  173218 ssh_runner.go:195] Run: crio --version
	I1004 02:10:43.446530  173218 ssh_runner.go:195] Run: crio --version
	I1004 02:10:43.498938  173218 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 02:10:43.500440  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetIP
	I1004 02:10:43.503434  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:43.503833  173218 main.go:141] libmachine: (kindnet-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:28:9b:7b", ip: ""} in network mk-kindnet-171116: {Iface:virbr6 ExpiryTime:2023-10-04 03:10:31 +0000 UTC Type:0 Mac:52:54:00:28:9b:7b Iaid: IPaddr:192.168.83.126 Prefix:24 Hostname:kindnet-171116 Clientid:01:52:54:00:28:9b:7b}
	I1004 02:10:43.503869  173218 main.go:141] libmachine: (kindnet-171116) DBG | domain kindnet-171116 has defined IP address 192.168.83.126 and MAC address 52:54:00:28:9b:7b in network mk-kindnet-171116
	I1004 02:10:43.504057  173218 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1004 02:10:43.508439  173218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:10:43.522938  173218 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:10:43.522994  173218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:10:43.562482  173218 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 02:10:43.562558  173218 ssh_runner.go:195] Run: which lz4
	I1004 02:10:43.566686  173218 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 02:10:43.571138  173218 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:10:43.571167  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 02:10:41.192724  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:41.693062  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:42.192762  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:42.692790  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:43.193133  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:43.692680  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:44.193597  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:44.692773  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:45.193230  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:45.692767  172863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:10:45.825704  172863 kubeadm.go:1081] duration metric: took 12.222478291s to wait for elevateKubeSystemPrivileges.
	I1004 02:10:45.825730  172863 kubeadm.go:406] StartCluster complete in 24.373271218s
	I1004 02:10:45.825748  172863 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:45.825816  172863 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:10:45.827604  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:45.827915  172863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:10:45.828033  172863 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:10:45.828093  172863 addons.go:69] Setting storage-provisioner=true in profile "auto-171116"
	I1004 02:10:45.828112  172863 addons.go:231] Setting addon storage-provisioner=true in "auto-171116"
	I1004 02:10:45.828147  172863 config.go:182] Loaded profile config "auto-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:10:45.828179  172863 host.go:66] Checking if "auto-171116" exists ...
	I1004 02:10:45.828212  172863 addons.go:69] Setting default-storageclass=true in profile "auto-171116"
	I1004 02:10:45.828235  172863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-171116"
	I1004 02:10:45.828637  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:45.828655  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:45.828665  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:45.828679  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:45.849643  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42853
	I1004 02:10:45.849650  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46569
	I1004 02:10:45.850246  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:45.850351  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:45.850796  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:10:45.850818  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:45.851013  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:10:45.851045  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:45.851155  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:45.851378  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:45.851767  172863 main.go:141] libmachine: (auto-171116) Calling .GetState
	I1004 02:10:45.851780  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:45.851810  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:45.855256  172863 addons.go:231] Setting addon default-storageclass=true in "auto-171116"
	I1004 02:10:45.855304  172863 host.go:66] Checking if "auto-171116" exists ...
	I1004 02:10:45.855661  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:45.855684  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:45.872029  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I1004 02:10:45.872238  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I1004 02:10:45.872603  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:45.872807  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:45.873125  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:10:45.873147  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:45.873496  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:45.873668  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:10:45.873689  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:45.873765  172863 main.go:141] libmachine: (auto-171116) Calling .GetState
	I1004 02:10:45.876853  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:45.876876  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:45.879089  172863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:10:45.877418  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:10:45.880749  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:10:45.880881  172863 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:10:45.880900  172863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:10:45.880931  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:45.884991  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:45.886140  172863 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-171116" context rescaled to 1 replicas
	I1004 02:10:45.886165  172863 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.39 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:10:45.888036  172863 out.go:177] * Verifying Kubernetes components...
	I1004 02:10:45.496282  173218 crio.go:444] Took 1.929626 seconds to copy over tarball
	I1004 02:10:45.496394  173218 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:10:45.886242  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:45.886629  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:45.889865  172863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:10:45.889962  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:45.890822  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:45.891057  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:45.891244  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:45.901081  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41523
	I1004 02:10:45.901553  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:10:45.902136  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:10:45.902162  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:10:45.902544  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:10:45.902779  172863 main.go:141] libmachine: (auto-171116) Calling .GetState
	I1004 02:10:45.904545  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:10:45.904819  172863 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:10:45.904837  172863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:10:45.904858  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHHostname
	I1004 02:10:45.911424  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:45.911898  172863 main.go:141] libmachine: (auto-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:7f:42", ip: ""} in network mk-auto-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:10:02 +0000 UTC Type:0 Mac:52:54:00:2d:7f:42 Iaid: IPaddr:192.168.72.39 Prefix:24 Hostname:auto-171116 Clientid:01:52:54:00:2d:7f:42}
	I1004 02:10:45.911923  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined IP address 192.168.72.39 and MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:10:45.912246  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHPort
	I1004 02:10:45.912510  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHKeyPath
	I1004 02:10:45.912669  172863 main.go:141] libmachine: (auto-171116) Calling .GetSSHUsername
	I1004 02:10:45.912960  172863 sshutil.go:53] new ssh client: &{IP:192.168.72.39 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa Username:docker}
	I1004 02:10:46.108902  172863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:10:46.110723  172863 node_ready.go:35] waiting up to 15m0s for node "auto-171116" to be "Ready" ...
	I1004 02:10:46.146116  172863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:10:46.147159  172863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:10:46.184785  172863 node_ready.go:49] node "auto-171116" has status "Ready":"True"
	I1004 02:10:46.184816  172863 node_ready.go:38] duration metric: took 74.005631ms waiting for node "auto-171116" to be "Ready" ...
	I1004 02:10:46.184829  172863 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:10:46.264811  172863 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace to be "Ready" ...
	I1004 02:10:47.959316  172863 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.850303058s)
	I1004 02:10:47.959361  172863 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
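	For readability: the escaped sed program in the Run/Completed lines above rewrites the CoreDNS ConfigMap so that its Corefile gains a log directive above the errors line plus the following hosts block (reconstructed from the sed expression itself, indentation approximate):

	        hosts {
	           192.168.72.1 host.minikube.internal
	           fallthrough
	        }

	That block is what the "host record injected into CoreDNS's ConfigMap" message refers to.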
	I1004 02:10:49.120829  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:10:51.270035  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:10:51.580819  172863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.434617195s)
	I1004 02:10:51.580860  172863 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.433670342s)
	I1004 02:10:51.580885  172863 main.go:141] libmachine: Making call to close driver server
	I1004 02:10:51.580898  172863 main.go:141] libmachine: Making call to close driver server
	I1004 02:10:51.580904  172863 main.go:141] libmachine: (auto-171116) Calling .Close
	I1004 02:10:51.580912  172863 main.go:141] libmachine: (auto-171116) Calling .Close
	I1004 02:10:51.581411  172863 main.go:141] libmachine: (auto-171116) DBG | Closing plugin on server side
	I1004 02:10:51.581429  172863 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:10:51.581446  172863 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:10:51.581457  172863 main.go:141] libmachine: Making call to close driver server
	I1004 02:10:51.581467  172863 main.go:141] libmachine: (auto-171116) DBG | Closing plugin on server side
	I1004 02:10:51.581474  172863 main.go:141] libmachine: (auto-171116) Calling .Close
	I1004 02:10:51.581706  172863 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:10:51.581723  172863 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:10:51.581753  172863 main.go:141] libmachine: (auto-171116) DBG | Closing plugin on server side
	I1004 02:10:51.583046  172863 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:10:51.583071  172863 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:10:51.583083  172863 main.go:141] libmachine: Making call to close driver server
	I1004 02:10:51.583094  172863 main.go:141] libmachine: (auto-171116) Calling .Close
	I1004 02:10:51.583367  172863 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:10:51.583383  172863 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:10:51.613194  172863 main.go:141] libmachine: Making call to close driver server
	I1004 02:10:51.613225  172863 main.go:141] libmachine: (auto-171116) Calling .Close
	I1004 02:10:51.613561  172863 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:10:51.613584  172863 main.go:141] libmachine: (auto-171116) DBG | Closing plugin on server side
	I1004 02:10:51.613589  172863 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:10:51.615786  172863 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1004 02:10:48.985773  173218 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.489344744s)
	I1004 02:10:48.985800  173218 crio.go:451] Took 3.489485 seconds to extract the tarball
	I1004 02:10:48.985809  173218 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:10:49.028628  173218 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:10:49.095220  173218 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 02:10:49.095250  173218 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:10:49.095319  173218 ssh_runner.go:195] Run: crio config
	I1004 02:10:49.155499  173218 cni.go:84] Creating CNI manager for "kindnet"
	I1004 02:10:49.155538  173218 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 02:10:49.155563  173218 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.126 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-171116 NodeName:kindnet-171116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.126"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.126 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:10:49.155716  173218 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.126
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-171116"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.126
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.126"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:10:49.155794  173218 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kindnet-171116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.126
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:kindnet-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I1004 02:10:49.155852  173218 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 02:10:49.165081  173218 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:10:49.165158  173218 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:10:49.174598  173218 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1004 02:10:49.193912  173218 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:10:49.212037  173218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
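	At this point the rendered kubeadm config sits on the node at /var/tmp/minikube/kubeadm.yaml.new (2101 bytes). As a hedged aside, not something this run performs: recent kubeadm releases ship a config validator subcommand, so a file of this shape could in principle be sanity-checked in place with the same binaries minikube staged, roughly:

	sudo /var/lib/minikube/binaries/v1.28.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

	Treat the subcommand as an assumption about the kubeadm build; the log itself only copies the file and later runs kubeadm init against it.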
	I1004 02:10:49.230274  173218 ssh_runner.go:195] Run: grep 192.168.83.126	control-plane.minikube.internal$ /etc/hosts
	I1004 02:10:49.234407  173218 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.126	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:10:49.247756  173218 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116 for IP: 192.168.83.126
	I1004 02:10:49.247803  173218 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.247965  173218 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 02:10:49.248040  173218 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 02:10:49.248107  173218 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.key
	I1004 02:10:49.248124  173218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.crt with IP's: []
	I1004 02:10:49.394869  173218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.crt ...
	I1004 02:10:49.394901  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.crt: {Name:mk576e0a6566ee28c0bbb819b63d2f0dce925ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.395097  173218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.key ...
	I1004 02:10:49.395122  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/client.key: {Name:mk15a17d3924b143b2bb0adad6e67717d912e924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.395264  173218 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key.b74b553d
	I1004 02:10:49.395295  173218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt.b74b553d with IP's: [192.168.83.126 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 02:10:49.601357  173218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt.b74b553d ...
	I1004 02:10:49.601390  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt.b74b553d: {Name:mkca3afb7b16c715bdad59cb672b4bce4eba486d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.601547  173218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key.b74b553d ...
	I1004 02:10:49.601558  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key.b74b553d: {Name:mk4e42dfa5646b7c1fc434a4aa8c5cfe1a36dacf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.601631  173218 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt.b74b553d -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt
	I1004 02:10:49.601693  173218 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key.b74b553d -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key
	I1004 02:10:49.601745  173218 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.key
	I1004 02:10:49.601758  173218 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.crt with IP's: []
	I1004 02:10:49.731104  173218 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.crt ...
	I1004 02:10:49.731135  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.crt: {Name:mk248d9c4e2c9373c59596303cf4cffc038c9622 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.731298  173218 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.key ...
	I1004 02:10:49.731310  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.key: {Name:mk7967dda2a448732cd910671e62d20fe37a378e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:10:49.731467  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 02:10:49.731506  173218 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 02:10:49.731519  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 02:10:49.731540  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:10:49.731562  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:10:49.731584  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 02:10:49.731623  173218 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:10:49.732181  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 02:10:49.758605  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:10:49.783717  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:10:49.810484  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/kindnet-171116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 02:10:49.835941  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:10:49.861629  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:10:49.885886  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:10:49.910659  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:10:49.938652  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 02:10:49.963927  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:10:49.988420  173218 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 02:10:50.014063  173218 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:10:50.031731  173218 ssh_runner.go:195] Run: openssl version
	I1004 02:10:50.037612  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 02:10:50.048471  173218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 02:10:50.053175  173218 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 02:10:50.053227  173218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 02:10:50.059541  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 02:10:50.070050  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:10:50.079893  173218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:50.084812  173218 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:50.084877  173218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:10:50.091453  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:10:50.102447  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 02:10:50.113929  173218 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 02:10:50.119017  173218 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 02:10:50.119090  173218 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 02:10:50.124629  173218 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
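	The 8-hex-digit names used above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject hashes: each openssl x509 -hash -noout call prints the hash, and the matching symlink under /etc/ssl/certs is how TLS clients locate a CA certificate by that hash. A minimal check, shown only as an illustration of the convention rather than a command from this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, which matches the /etc/ssl/certs/b5213941.0 link created above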
	I1004 02:10:50.135542  173218 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 02:10:50.140141  173218 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 02:10:50.140205  173218 kubeadm.go:404] StartCluster: {Name:kindnet-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.2 ClusterName:kindnet-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.126 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Moun
tOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:10:50.140310  173218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:10:50.140368  173218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:10:50.183535  173218 cri.go:89] found id: ""
	I1004 02:10:50.183649  173218 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:10:50.193627  173218 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:10:50.202962  173218 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:10:50.213975  173218 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:10:50.214021  173218 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:10:50.447503  173218 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:10:51.617345  172863 addons.go:502] enable addons completed in 5.789309765s: enabled=[storage-provisioner default-storageclass]
	I1004 02:10:53.588435  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:10:55.588606  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:10:58.087053  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:11:00.088163  172863 pod_ready.go:102] pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace has status "Ready":"False"
	I1004 02:11:02.086885  172863 pod_ready.go:97] error getting pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-4h86x" not found
	I1004 02:11:02.086911  172863 pod_ready.go:81] duration metric: took 15.822070466s waiting for pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace to be "Ready" ...
	E1004 02:11:02.086922  172863 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-4h86x" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-4h86x" not found
	I1004 02:11:02.086929  172863 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.092752  172863 pod_ready.go:92] pod "etcd-auto-171116" in "kube-system" namespace has status "Ready":"True"
	I1004 02:11:02.092776  172863 pod_ready.go:81] duration metric: took 5.842273ms waiting for pod "etcd-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.092785  172863 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.099722  172863 pod_ready.go:92] pod "kube-apiserver-auto-171116" in "kube-system" namespace has status "Ready":"True"
	I1004 02:11:02.099749  172863 pod_ready.go:81] duration metric: took 6.956251ms waiting for pod "kube-apiserver-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.099761  172863 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.106504  172863 pod_ready.go:92] pod "kube-controller-manager-auto-171116" in "kube-system" namespace has status "Ready":"True"
	I1004 02:11:02.106524  172863 pod_ready.go:81] duration metric: took 6.756172ms waiting for pod "kube-controller-manager-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.106533  172863 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-8jl5r" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.111920  172863 pod_ready.go:92] pod "kube-proxy-8jl5r" in "kube-system" namespace has status "Ready":"True"
	I1004 02:11:02.111941  172863 pod_ready.go:81] duration metric: took 5.402562ms waiting for pod "kube-proxy-8jl5r" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.111950  172863 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.285611  172863 pod_ready.go:92] pod "kube-scheduler-auto-171116" in "kube-system" namespace has status "Ready":"True"
	I1004 02:11:02.285638  172863 pod_ready.go:81] duration metric: took 173.681527ms waiting for pod "kube-scheduler-auto-171116" in "kube-system" namespace to be "Ready" ...
	I1004 02:11:02.285646  172863 pod_ready.go:38] duration metric: took 16.100776083s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:11:02.285661  172863 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:11:02.285717  172863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:11:02.301876  172863 api_server.go:72] duration metric: took 16.415682871s to wait for apiserver process to appear ...
	I1004 02:11:02.301900  172863 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:11:02.301915  172863 api_server.go:253] Checking apiserver healthz at https://192.168.72.39:8443/healthz ...
	I1004 02:11:02.306771  172863 api_server.go:279] https://192.168.72.39:8443/healthz returned 200:
	ok
	I1004 02:11:02.308445  172863 api_server.go:141] control plane version: v1.28.2
	I1004 02:11:02.308474  172863 api_server.go:131] duration metric: took 6.565736ms to wait for apiserver health ...
	I1004 02:11:02.308485  172863 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:11:02.488283  172863 system_pods.go:59] 7 kube-system pods found
	I1004 02:11:02.488325  172863 system_pods.go:61] "coredns-5dd5756b68-k5rzg" [3d5ba246-6127-4698-a79b-31e8c1b81bc2] Running
	I1004 02:11:02.488333  172863 system_pods.go:61] "etcd-auto-171116" [750980ba-5a18-4d3a-96a1-3ce9d06be0f2] Running
	I1004 02:11:02.488340  172863 system_pods.go:61] "kube-apiserver-auto-171116" [c5b8fe39-0c14-4b5f-96d7-bd4ad6f5fa08] Running
	I1004 02:11:02.488349  172863 system_pods.go:61] "kube-controller-manager-auto-171116" [6000b9b3-d3ce-41f9-b5dc-7ae0e89e82e8] Running
	I1004 02:11:02.488355  172863 system_pods.go:61] "kube-proxy-8jl5r" [6ae84a96-9cbb-4444-aca7-d471ee4b9fb8] Running
	I1004 02:11:02.488362  172863 system_pods.go:61] "kube-scheduler-auto-171116" [d5feb29b-37fd-43c5-a2b1-4e030328997d] Running
	I1004 02:11:02.488368  172863 system_pods.go:61] "storage-provisioner" [1a1a838f-0f99-410f-926b-e380d86fef71] Running
	I1004 02:11:02.488377  172863 system_pods.go:74] duration metric: took 179.884724ms to wait for pod list to return data ...
	I1004 02:11:02.488392  172863 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:11:02.685505  172863 default_sa.go:45] found service account: "default"
	I1004 02:11:02.685538  172863 default_sa.go:55] duration metric: took 197.13837ms for default service account to be created ...
	I1004 02:11:02.685549  172863 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:11:02.889668  172863 system_pods.go:86] 7 kube-system pods found
	I1004 02:11:02.889697  172863 system_pods.go:89] "coredns-5dd5756b68-k5rzg" [3d5ba246-6127-4698-a79b-31e8c1b81bc2] Running
	I1004 02:11:02.889703  172863 system_pods.go:89] "etcd-auto-171116" [750980ba-5a18-4d3a-96a1-3ce9d06be0f2] Running
	I1004 02:11:02.889710  172863 system_pods.go:89] "kube-apiserver-auto-171116" [c5b8fe39-0c14-4b5f-96d7-bd4ad6f5fa08] Running
	I1004 02:11:02.889719  172863 system_pods.go:89] "kube-controller-manager-auto-171116" [6000b9b3-d3ce-41f9-b5dc-7ae0e89e82e8] Running
	I1004 02:11:02.889727  172863 system_pods.go:89] "kube-proxy-8jl5r" [6ae84a96-9cbb-4444-aca7-d471ee4b9fb8] Running
	I1004 02:11:02.889735  172863 system_pods.go:89] "kube-scheduler-auto-171116" [d5feb29b-37fd-43c5-a2b1-4e030328997d] Running
	I1004 02:11:02.889741  172863 system_pods.go:89] "storage-provisioner" [1a1a838f-0f99-410f-926b-e380d86fef71] Running
	I1004 02:11:02.889749  172863 system_pods.go:126] duration metric: took 204.193987ms to wait for k8s-apps to be running ...
	I1004 02:11:02.889757  172863 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:11:02.889811  172863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:11:02.905802  172863 system_svc.go:56] duration metric: took 16.034795ms WaitForService to wait for kubelet.
	I1004 02:11:02.905831  172863 kubeadm.go:581] duration metric: took 17.019645281s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 02:11:02.905866  172863 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:11:03.085302  172863 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 02:11:03.085342  172863 node_conditions.go:123] node cpu capacity is 2
	I1004 02:11:03.085354  172863 node_conditions.go:105] duration metric: took 179.483347ms to run NodePressure ...
	I1004 02:11:03.085365  172863 start.go:228] waiting for startup goroutines ...
	I1004 02:11:03.085371  172863 start.go:233] waiting for cluster config update ...
	I1004 02:11:03.085380  172863 start.go:242] writing updated cluster config ...
	I1004 02:11:03.085628  172863 ssh_runner.go:195] Run: rm -f paused
	I1004 02:11:03.144180  172863 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 02:11:03.146676  172863 out.go:177] * Done! kubectl is now configured to use "auto-171116" cluster and "default" namespace by default
	I1004 02:11:03.584705  173218 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:11:03.584777  173218 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:11:03.584860  173218 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:11:03.584969  173218 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:11:03.585104  173218 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:11:03.585180  173218 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:11:03.587503  173218 out.go:204]   - Generating certificates and keys ...
	I1004 02:11:03.587601  173218 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:11:03.587684  173218 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:11:03.587776  173218 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:11:03.587860  173218 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:11:03.587956  173218 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:11:03.588026  173218 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 02:11:03.588118  173218 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 02:11:03.588284  173218 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kindnet-171116 localhost] and IPs [192.168.83.126 127.0.0.1 ::1]
	I1004 02:11:03.588359  173218 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 02:11:03.588499  173218 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kindnet-171116 localhost] and IPs [192.168.83.126 127.0.0.1 ::1]
	I1004 02:11:03.588579  173218 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:11:03.588660  173218 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:11:03.588713  173218 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 02:11:03.588784  173218 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:11:03.588844  173218 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:11:03.588915  173218 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:11:03.588985  173218 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:11:03.589049  173218 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:11:03.589142  173218 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:11:03.589225  173218 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:11:03.590855  173218 out.go:204]   - Booting up control plane ...
	I1004 02:11:03.590975  173218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:11:03.591126  173218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:11:03.591231  173218 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:11:03.591363  173218 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:11:03.591511  173218 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:11:03.591678  173218 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:11:03.591891  173218 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:11:03.592023  173218 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.503628 seconds
	I1004 02:11:03.592159  173218 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:11:03.592324  173218 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:11:03.592392  173218 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:11:03.592601  173218 kubeadm.go:322] [mark-control-plane] Marking the node kindnet-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:11:03.592674  173218 kubeadm.go:322] [bootstrap-token] Using token: l6ttn0.ori139qoqxere4um
	I1004 02:11:03.595384  173218 out.go:204]   - Configuring RBAC rules ...
	I1004 02:11:03.595535  173218 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:11:03.595644  173218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:11:03.595819  173218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:11:03.596024  173218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:11:03.596207  173218 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:11:03.596343  173218 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:11:03.596460  173218 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:11:03.596526  173218 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:11:03.596594  173218 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:11:03.596603  173218 kubeadm.go:322] 
	I1004 02:11:03.596684  173218 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:11:03.596693  173218 kubeadm.go:322] 
	I1004 02:11:03.596797  173218 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:11:03.596807  173218 kubeadm.go:322] 
	I1004 02:11:03.596840  173218 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:11:03.596919  173218 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:11:03.596989  173218 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:11:03.596998  173218 kubeadm.go:322] 
	I1004 02:11:03.597071  173218 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:11:03.597080  173218 kubeadm.go:322] 
	I1004 02:11:03.597160  173218 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:11:03.597170  173218 kubeadm.go:322] 
	I1004 02:11:03.597246  173218 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:11:03.597341  173218 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:11:03.597429  173218 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:11:03.597438  173218 kubeadm.go:322] 
	I1004 02:11:03.597550  173218 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:11:03.597650  173218 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:11:03.597660  173218 kubeadm.go:322] 
	I1004 02:11:03.597758  173218 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l6ttn0.ori139qoqxere4um \
	I1004 02:11:03.597915  173218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:11:03.597950  173218 kubeadm.go:322] 	--control-plane 
	I1004 02:11:03.597960  173218 kubeadm.go:322] 
	I1004 02:11:03.598080  173218 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:11:03.598091  173218 kubeadm.go:322] 
	I1004 02:11:03.598185  173218 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l6ttn0.ori139qoqxere4um \
	I1004 02:11:03.598362  173218 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
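	The sha256 value in the kubeadm join commands above is the hash of the cluster CA's public key. A sketch of how it could be reproduced on this node, following the standard kubeadm recipe and assuming the certificatesDir (/var/lib/minikube/certs) from the ClusterConfiguration earlier; the log does not actually run this:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'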
	I1004 02:11:03.598381  173218 cni.go:84] Creating CNI manager for "kindnet"
	I1004 02:11:03.600743  173218 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1004 02:11:03.602745  173218 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 02:11:03.629028  173218 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 02:11:03.629052  173218 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1004 02:11:03.756026  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 02:11:04.925904  173218 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.169832523s)
	I1004 02:11:04.925967  173218 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:11:04.926089  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:04.926102  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=kindnet-171116 minikube.k8s.io/updated_at=2023_10_04T02_11_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:05.101477  173218 ops.go:34] apiserver oom_adj: -16
	I1004 02:11:05.101595  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:05.231997  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:05.853468  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:06.354154  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:06.853898  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:07.354162  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:07.853881  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:08.353945  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:08.853414  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:09.354292  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:09.854180  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:10.353349  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:10.853433  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:11.354301  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:11.853903  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:12.353941  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:12.853716  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:13.353724  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:13.854307  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:14.354078  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:14.854372  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:15.353958  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:15.853605  173218 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:11:16.096240  173218 kubeadm.go:1081] duration metric: took 11.170233519s to wait for elevateKubeSystemPrivileges.
	I1004 02:11:16.096272  173218 kubeadm.go:406] StartCluster complete in 25.956072277s
	I1004 02:11:16.096301  173218 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:11:16.096372  173218 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:11:16.098849  173218 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:11:16.101728  173218 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:11:16.101986  173218 config.go:182] Loaded profile config "kindnet-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:11:16.102033  173218 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:11:16.102091  173218 addons.go:69] Setting storage-provisioner=true in profile "kindnet-171116"
	I1004 02:11:16.102109  173218 addons.go:231] Setting addon storage-provisioner=true in "kindnet-171116"
	I1004 02:11:16.102152  173218 host.go:66] Checking if "kindnet-171116" exists ...
	I1004 02:11:16.102543  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:11:16.102567  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:11:16.102645  173218 addons.go:69] Setting default-storageclass=true in profile "kindnet-171116"
	I1004 02:11:16.102662  173218 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-171116"
	I1004 02:11:16.103028  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:11:16.103048  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:11:16.125455  173218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1004 02:11:16.128135  173218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34221
	I1004 02:11:16.128684  173218 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:11:16.129431  173218 main.go:141] libmachine: Using API Version  1
	I1004 02:11:16.129454  173218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:11:16.129960  173218 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:11:16.130322  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetState
	I1004 02:11:16.131686  173218 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:11:16.132525  173218 main.go:141] libmachine: Using API Version  1
	I1004 02:11:16.132544  173218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:11:16.132976  173218 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:11:16.133553  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:11:16.133581  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:11:16.136468  173218 addons.go:231] Setting addon default-storageclass=true in "kindnet-171116"
	I1004 02:11:16.136512  173218 host.go:66] Checking if "kindnet-171116" exists ...
	I1004 02:11:16.136931  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:11:16.136965  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:11:16.157097  173218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37407
	I1004 02:11:16.157454  173218 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I1004 02:11:16.157807  173218 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:11:16.157860  173218 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:11:16.158471  173218 main.go:141] libmachine: Using API Version  1
	I1004 02:11:16.158492  173218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:11:16.158636  173218 main.go:141] libmachine: Using API Version  1
	I1004 02:11:16.158647  173218 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:11:16.159220  173218 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:11:16.159310  173218 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:11:16.159439  173218 main.go:141] libmachine: (kindnet-171116) Calling .GetState
	I1004 02:11:16.159932  173218 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:11:16.159958  173218 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:11:16.161949  173218 main.go:141] libmachine: (kindnet-171116) Calling .DriverName
	I1004 02:11:16.164055  173218 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:01 UTC, ends at Wed 2023-10-04 02:11:16 UTC. --
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.842038151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9e619a9c-314d-4cfc-ab92-c607369085b2 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.847623817Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=85d57a20-5d77-4ff1-a28c-073b21c938b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.852477735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385476852456330,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=85d57a20-5d77-4ff1-a28c-073b21c938b9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.853067912Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a493d8ea-5990-4142-8764-5f354f290967 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.853206398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a493d8ea-5990-4142-8764-5f354f290967 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.853365570Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a493d8ea-5990-4142-8764-5f354f290967 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.896397921Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b973225d-147f-4286-9112-5ad6fe74477d name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.896646758Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c1d1d8ba-3421-4e49-9138-9efdd0392e83,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384527678600604,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-10-04T01:55:27.343048503Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:edff9363c1d0ec3238c59db54765e464374c432c3f8b20c54968909c07c471f5,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-27696,Uid:3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384527436354273,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-27696,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb
1,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:55:27.098470019Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-79qrq,Uid:0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384526063334453,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:55:25.720977219Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&PodSandboxMetadata{Name:kube-proxy-f99th,Uid:984b2db7-6f82-45db-888f-da
52230d1bc5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384525383012237,Labels:map[string]string{controller-revision-hash: 5cbdb8dcbd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-10-04T01:55:24.740549031Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-509298,Uid:84e13fece64748bbed1ba334a70e913c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384502727359109,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 84e13fece64748bbed1ba334a70e913c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 84e13fece64748bbed1ba334a70e913c,kubernetes.io/config.seen: 2023-10-04T01:55:02.187706276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-509298,Uid:7ad35b21805a52323c0ee89e7610dce9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384502710887642,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.170:2379,kubernetes.io/config.hash: 7ad35b21805a52323c0ee89e7610dce9,kubernetes.io/config.seen: 2023-10-04T01:55:02.187703781Z,kubernetes.io/config.source: file,},Ru
ntimeHandler:,},&PodSandbox{Id:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-509298,Uid:76e7f7a31209d4cfecd5cfd46ce6d1d1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384502696780930,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.170:8443,kubernetes.io/config.hash: 76e7f7a31209d4cfecd5cfd46ce6d1d1,kubernetes.io/config.seen: 2023-10-04T01:55:02.187705304Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-509298,Uid:e77f6a6312f38decf908ee639e1f4e2b,Names
pace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1696384502686987835,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e77f6a6312f38decf908ee639e1f4e2b,kubernetes.io/config.seen: 2023-10-04T01:55:02.187698746Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b973225d-147f-4286-9112-5ad6fe74477d name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.897911828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e67c297f-b17b-4ebd-9db3-4fb3e79931a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.898021249Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e67c297f-b17b-4ebd-9db3-4fb3e79931a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.898328020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e67c297f-b17b-4ebd-9db3-4fb3e79931a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.899939040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c66c032a-9651-4de5-839d-bff2a2549efb name=/runtime.v1.RuntimeService/Version
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.900009410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c66c032a-9651-4de5-839d-bff2a2549efb name=/runtime.v1.RuntimeService/Version
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.901612046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3a86de94-41a9-4b21-99bd-e5d4e1df9c06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.902011103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385476901997470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=3a86de94-41a9-4b21-99bd-e5d4e1df9c06 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.902733136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=43393f83-6d23-49f6-82a6-230ca2b59d70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.902785537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=43393f83-6d23-49f6-82a6-230ca2b59d70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.903015690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=43393f83-6d23-49f6-82a6-230ca2b59d70 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.944385398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=281b04fd-fbb6-4597-88c3-30566873736b name=/runtime.v1.RuntimeService/Version
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.944497757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=281b04fd-fbb6-4597-88c3-30566873736b name=/runtime.v1.RuntimeService/Version
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.946402641Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a1a648c9-162e-453f-a786-91025cc7c27e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.946889981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385476946872769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a1a648c9-162e-453f-a786-91025cc7c27e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.947774386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a58d1a07-b592-4df3-94ad-218fa43de6a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.947877883Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a58d1a07-b592-4df3-94ad-218fa43de6a7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:11:16 embed-certs-509298 crio[728]: time="2023-10-04 02:11:16.948069652Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce,PodSandboxId:5e11df276a01bac4aecb08f3eb091f2d689b27fce2565c120fc4d32588b95e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384528649665440,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1d1d8ba-3421-4e49-9138-9efdd0392e83,},Annotations:map[string]string{io.kubernetes.container.hash: 8f19f6ba,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646,PodSandboxId:12445b59fdb15a962d7506de57af413e8aaf3e0e8105fc531a45d5c7bed9cbb6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696384527856534588,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-79qrq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bbb5cfe-1fbf-426a-9866-0d5ce92e0519,},Annotations:map[string]string{io.kubernetes.container.hash: 2d74ec0e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61,PodSandboxId:c0526e00426afbe0513d5b2024a811cdcb13d8b91e368f99286c796b6fc81b11,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696384526651199820,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f99th,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 984b2db7-6f82-45db-888f-da52230d1bc5,},Annotations:map[string]string{io.kubernetes.container.hash: 6fe16861,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f,PodSandboxId:3c6fed7f87557cd6fa0ed54dcdd1e03021f7d652bf098d1d3b08ec302c2cfebe,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384503799883516,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ad35b21805a52323c0ee89e7610dce9,},An
notations:map[string]string{io.kubernetes.container.hash: 7f7d7420,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a,PodSandboxId:007f4f9fa55d542fabc87361eb79a720b7d79b9565e926b43c8c293accb895c4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384503720228286,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e77f6a6312f38decf908ee639e1f4e2b,},Annotations:
map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f,PodSandboxId:1588b854bcd2da5549d4be6646030cbd198aeb35790312c7511c2005771741ed,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384503376454374,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76e7f7a31209d4cfecd5cfd46ce6d1d1,},Annotations:map[string
]string{io.kubernetes.container.hash: a013f2b0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab,PodSandboxId:2bb24e65b50839a2931175407d9b042ea2c4db0b9a4ce5f6fad33347832d3395,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384503157579460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-509298,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 84e13fece64748bbed1ba334a70e913
c,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a58d1a07-b592-4df3-94ad-218fa43de6a7 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0b97474e8630e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   5e11df276a01b       storage-provisioner
	f3316d73aebf8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   12445b59fdb15       coredns-5dd5756b68-79qrq
	a46b80885b26c       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   15 minutes ago      Running             kube-proxy                0                   c0526e00426af       kube-proxy-f99th
	7f21b00e9dc48       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   3c6fed7f87557       etcd-embed-certs-509298
	0af148e957984       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   16 minutes ago      Running             kube-scheduler            2                   007f4f9fa55d5       kube-scheduler-embed-certs-509298
	a32990e0a3fdd       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   16 minutes ago      Running             kube-apiserver            2                   1588b854bcd2d       kube-apiserver-embed-certs-509298
	f6d6bd9377fe5       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   16 minutes ago      Running             kube-controller-manager   2                   2bb24e65b5083       kube-controller-manager-embed-certs-509298
	
	* 
	* ==> coredns [f3316d73aebf8ebd29efb3e164c24d995b712bdb7a2708a28e740d983628d646] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-509298
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-509298
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=embed-certs-509298
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:55:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-509298
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:11:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:10:51 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:10:51 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:10:51 +0000   Wed, 04 Oct 2023 01:55:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:10:51 +0000   Wed, 04 Oct 2023 01:55:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.170
	  Hostname:    embed-certs-509298
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ca3d4c150cb4b8d88c9054d5234c3d2
	  System UUID:                1ca3d4c1-50cb-4b8d-88c9-054d5234c3d2
	  Boot ID:                    63533b45-ed5a-431a-bd38-01bf2e9c1790
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-79qrq                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-509298                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-509298             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-509298    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-f99th                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-509298             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-27696               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-509298 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-509298 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-509298 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             16m   kubelet          Node embed-certs-509298 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                16m   kubelet          Node embed-certs-509298 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-509298 event: Registered Node embed-certs-509298 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.076956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.495858] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Oct 4 01:50] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.146804] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.537969] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.791306] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.136700] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.176354] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.117843] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[  +0.237591] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[ +17.532316] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +22.511843] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 4 01:55] systemd-fstab-generator[3455]: Ignoring "noauto" for root device
	[ +10.304311] systemd-fstab-generator[3788]: Ignoring "noauto" for root device
	[ +14.271481] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [7f21b00e9dc48612c94de7485be5b9019d1127355c38c08b8241a2adf592c67f] <==
	* {"level":"info","ts":"2023-10-04T01:58:24.099496Z","caller":"traceutil/trace.go:171","msg":"trace[677724988] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"207.843989ms","start":"2023-10-04T01:58:23.89161Z","end":"2023-10-04T01:58:24.099454Z","steps":["trace[677724988] 'process raft request'  (duration: 207.640893ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:24.503559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.020387ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T01:58:24.50379Z","caller":"traceutil/trace.go:171","msg":"trace[422866158] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"384.347952ms","start":"2023-10-04T01:58:24.119415Z","end":"2023-10-04T01:58:24.503763Z","steps":["trace[422866158] 'range keys from in-memory index tree'  (duration: 383.941467ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:24.503944Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:58:24.119395Z","time spent":"384.456238ms","remote":"127.0.0.1:46034","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-10-04T02:05:07.045097Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2023-10-04T02:05:07.047724Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.154297ms","hash":1247089522}
	{"level":"info","ts":"2023-10-04T02:05:07.047802Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1247089522,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2023-10-04T02:10:07.05447Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"info","ts":"2023-10-04T02:10:07.056652Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":920,"took":"1.709439ms","hash":277204111}
	{"level":"info","ts":"2023-10-04T02:10:07.056717Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":277204111,"revision":920,"compact-revision":677}
	{"level":"warn","ts":"2023-10-04T02:10:21.10094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.492282ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1555864995736649987 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.170\" mod_revision:1166 > success:<request_put:<key:\"/registry/masterleases/192.168.50.170\" value_size:67 lease:1555864995736649983 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.170\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-04T02:10:21.101373Z","caller":"traceutil/trace.go:171","msg":"trace[179114581] transaction","detail":"{read_only:false; response_revision:1175; number_of_response:1; }","duration":"529.434658ms","start":"2023-10-04T02:10:20.571887Z","end":"2023-10-04T02:10:21.101321Z","steps":["trace[179114581] 'process raft request'  (duration: 529.333621ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:21.102345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:20.571867Z","time spent":"530.372269ms","remote":"127.0.0.1:46070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1173 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-04T02:10:21.101768Z","caller":"traceutil/trace.go:171","msg":"trace[382761711] transaction","detail":"{read_only:false; response_revision:1174; number_of_response:1; }","duration":"534.143586ms","start":"2023-10-04T02:10:20.567606Z","end":"2023-10-04T02:10:21.10175Z","steps":["trace[382761711] 'process raft request'  (duration: 305.909964ms)","trace[382761711] 'compare'  (duration: 224.189326ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-04T02:10:21.102892Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:20.567555Z","time spent":"535.286939ms","remote":"127.0.0.1:46040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.170\" mod_revision:1166 > success:<request_put:<key:\"/registry/masterleases/192.168.50.170\" value_size:67 lease:1555864995736649983 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.170\" > >"}
	{"level":"info","ts":"2023-10-04T02:10:21.104077Z","caller":"traceutil/trace.go:171","msg":"trace[1286605610] transaction","detail":"{read_only:false; response_revision:1176; number_of_response:1; }","duration":"223.061644ms","start":"2023-10-04T02:10:20.881003Z","end":"2023-10-04T02:10:21.104064Z","steps":["trace[1286605610] 'process raft request'  (duration: 222.337327ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:49.433448Z","caller":"traceutil/trace.go:171","msg":"trace[387478051] transaction","detail":"{read_only:false; response_revision:1198; number_of_response:1; }","duration":"106.415949ms","start":"2023-10-04T02:10:49.327016Z","end":"2023-10-04T02:10:49.433432Z","steps":["trace[387478051] 'process raft request'  (duration: 106.320429ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:51.061761Z","caller":"traceutil/trace.go:171","msg":"trace[352240902] linearizableReadLoop","detail":"{readStateIndex:1404; appliedIndex:1403; }","duration":"338.851026ms","start":"2023-10-04T02:10:50.722887Z","end":"2023-10-04T02:10:51.061738Z","steps":["trace[352240902] 'read index received'  (duration: 338.467727ms)","trace[352240902] 'applied index is now lower than readState.Index'  (duration: 381.803µs)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T02:10:51.061932Z","caller":"traceutil/trace.go:171","msg":"trace[1123106475] transaction","detail":"{read_only:false; response_revision:1199; number_of_response:1; }","duration":"587.241411ms","start":"2023-10-04T02:10:50.474525Z","end":"2023-10-04T02:10:51.061766Z","steps":["trace[1123106475] 'process raft request'  (duration: 586.873138ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:51.062007Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.625243ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas/\" range_end:\"/registry/flowschemas0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-04T02:10:51.06208Z","caller":"traceutil/trace.go:171","msg":"trace[1474299161] range","detail":"{range_begin:/registry/flowschemas/; range_end:/registry/flowschemas0; response_count:0; response_revision:1199; }","duration":"216.690484ms","start":"2023-10-04T02:10:50.845374Z","end":"2023-10-04T02:10:51.062065Z","steps":["trace[1474299161] 'agreement among raft nodes before linearized reading'  (duration: 216.593799ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:51.061948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"339.064607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-10-04T02:10:51.062037Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:50.474509Z","time spent":"587.482331ms","remote":"127.0.0.1:46040","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.170\" mod_revision:1191 > success:<request_put:<key:\"/registry/masterleases/192.168.50.170\" value_size:67 lease:1555864995736650135 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.170\" > >"}
	{"level":"info","ts":"2023-10-04T02:10:51.062846Z","caller":"traceutil/trace.go:171","msg":"trace[2023742377] range","detail":"{range_begin:/registry/jobs/; range_end:/registry/jobs0; response_count:0; response_revision:1199; }","duration":"339.956904ms","start":"2023-10-04T02:10:50.722862Z","end":"2023-10-04T02:10:51.062819Z","steps":["trace[2023742377] 'agreement among raft nodes before linearized reading'  (duration: 338.990859ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:51.063002Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:50.722848Z","time spent":"340.142579ms","remote":"127.0.0.1:46086","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":28,"request content":"key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" count_only:true "}
	
	* 
	* ==> kernel <==
	*  02:11:17 up 21 min,  0 users,  load average: 0.33, 0.27, 0.27
	Linux embed-certs-509298 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [a32990e0a3fdd5cff148b41ef518f787f0e890f0c6d0d082ec27af6ee369222f] <==
	* I1004 02:10:09.431425       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:10:09.431334       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:10:09.431535       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:10:09.432484       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:10:21.104093       1 trace.go:236] Trace[608604371]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.170,type:*v1.Endpoints,resource:apiServerIPInfo (04-Oct-2023 02:10:20.348) (total time: 755ms):
	Trace[608604371]: ---"Transaction prepared" 215ms (02:10:20.567)
	Trace[608604371]: ---"Txn call completed" 536ms (02:10:21.103)
	Trace[608604371]: [755.090117ms] [755.090117ms] END
	I1004 02:10:21.104864       1 trace.go:236] Trace[1802255660]: "Update" accept:application/json, */*,audit-id:0b0fa9ea-0571-44d8-bd21-8e54123f08e0,client:192.168.50.170,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (04-Oct-2023 02:10:20.569) (total time: 534ms):
	Trace[1802255660]: ["GuaranteedUpdate etcd3" audit-id:0b0fa9ea-0571-44d8-bd21-8e54123f08e0,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 534ms (02:10:20.570)
	Trace[1802255660]:  ---"Txn call completed" 533ms (02:10:21.104)]
	Trace[1802255660]: [534.872993ms] [534.872993ms] END
	I1004 02:10:51.063664       1 trace.go:236] Trace[511202142]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.170,type:*v1.Endpoints,resource:apiServerIPInfo (04-Oct-2023 02:10:50.350) (total time: 713ms):
	Trace[511202142]: ---"Transaction prepared" 120ms (02:10:50.474)
	Trace[511202142]: ---"Txn call completed" 589ms (02:10:51.063)
	Trace[511202142]: [713.555049ms] [713.555049ms] END
	I1004 02:11:08.395766       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:11:09.432117       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:11:09.432325       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:11:09.432354       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:11:09.433356       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:11:09.433432       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:11:09.433440       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [f6d6bd9377fe5ad6353456b0c6445b8854612271958f7dcfa69e86580e35a0ab] <==
	* I1004 02:05:25.460976       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:05:54.817083       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:05:55.470717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:06:24.824893       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:06:25.480941       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:06:26.421658       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="183.064µs"
	I1004 02:06:41.418930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.153µs"
	E1004 02:06:54.831463       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:06:55.492415       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:07:24.838764       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:25.502421       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:07:54.844866       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:55.513737       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:08:24.850591       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:25.523016       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:08:54.858439       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:55.532863       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:09:24.870425       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:25.544872       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:09:54.876399       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:55.555541       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:24.888489       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:25.568500       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:54.894684       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:55.578989       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [a46b80885b26c3d6e3e824aec544854787d2d1a1f65637b2317f2a59219b6b61] <==
	* I1004 01:55:28.166208       1 server_others.go:69] "Using iptables proxy"
	I1004 01:55:28.221702       1 node.go:141] Successfully retrieved node IP: 192.168.50.170
	I1004 01:55:28.434278       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:55:28.434353       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:55:28.437589       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:55:28.437672       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:55:28.437869       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:55:28.437904       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:55:28.439989       1 config.go:188] "Starting service config controller"
	I1004 01:55:28.440043       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:55:28.440073       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:55:28.440088       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:55:28.443914       1 config.go:315] "Starting node config controller"
	I1004 01:55:28.443952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:55:28.540415       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:55:28.540491       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:55:28.544397       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [0af148e95798416b6ca17a20b77a250b18d52c98ff936e914fce22d37e310d5a] <==
	* W1004 01:55:08.512585       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:55:08.512671       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 01:55:09.315267       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:55:09.315327       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 01:55:09.364264       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.364357       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.399692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:55:09.399746       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 01:55:09.466840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.466900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.511859       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.511917       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:09.538021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:55:09.538233       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 01:55:09.555874       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:55:09.555971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 01:55:09.561978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 01:55:09.562031       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 01:55:09.632604       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:55:09.632728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 01:55:09.677718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1004 01:55:09.677818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1004 01:55:10.033937       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 01:55:10.033987       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1004 01:55:11.895035       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:01 UTC, ends at Wed 2023-10-04 02:11:17 UTC. --
	Oct 04 02:08:42 embed-certs-509298 kubelet[3795]: E1004 02:08:42.402622    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:08:55 embed-certs-509298 kubelet[3795]: E1004 02:08:55.401980    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:09:08 embed-certs-509298 kubelet[3795]: E1004 02:09:08.403682    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:09:12 embed-certs-509298 kubelet[3795]: E1004 02:09:12.538539    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:09:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:09:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:09:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:09:19 embed-certs-509298 kubelet[3795]: E1004 02:09:19.402475    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:09:31 embed-certs-509298 kubelet[3795]: E1004 02:09:31.402498    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:09:46 embed-certs-509298 kubelet[3795]: E1004 02:09:46.403720    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:10:00 embed-certs-509298 kubelet[3795]: E1004 02:10:00.404121    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:10:12 embed-certs-509298 kubelet[3795]: E1004 02:10:12.538053    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:10:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:10:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:10:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:10:12 embed-certs-509298 kubelet[3795]: E1004 02:10:12.539383    3795 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 04 02:10:15 embed-certs-509298 kubelet[3795]: E1004 02:10:15.402924    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:10:29 embed-certs-509298 kubelet[3795]: E1004 02:10:29.402819    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:10:41 embed-certs-509298 kubelet[3795]: E1004 02:10:41.402604    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:10:52 embed-certs-509298 kubelet[3795]: E1004 02:10:52.403844    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:11:06 embed-certs-509298 kubelet[3795]: E1004 02:11:06.403385    3795 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-27696" podUID="3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1"
	Oct 04 02:11:12 embed-certs-509298 kubelet[3795]: E1004 02:11:12.536717    3795 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:11:12 embed-certs-509298 kubelet[3795]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:11:12 embed-certs-509298 kubelet[3795]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:11:12 embed-certs-509298 kubelet[3795]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	* 
	* ==> storage-provisioner [0b97474e8630e8bca9d82fc30e0076302ffe19f9b0b4ad51fc986ad04bf970ce] <==
	* I1004 01:55:28.819935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:55:28.830585       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:55:28.830729       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:55:28.848802       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:55:28.849770       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1!
	I1004 01:55:28.849626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23f1fe67-f369-4c37-928b-269ee8b0516f", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1 became leader
	I1004 01:55:28.950294       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-509298_1a85bd3b-850f-413c-97d5-ee7c672d97e1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-509298 -n embed-certs-509298
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-509298 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-27696
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696: exit status 1 (72.529012ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-27696" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-509298 describe pod metrics-server-57f55c9bc5-27696: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (405.23s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (296.3s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 02:05:33.291278  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 02:06:05.194308  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-273516 -n no-preload-273516
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:09:55.378797376 +0000 UTC m=+5187.749828409
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-273516 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-273516 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.747µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-273516 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-273516 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-273516 logs -n 25: (1.479473878s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p cert-expiration-528457                              | cert-expiration-528457       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-554732 | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | disable-driver-mounts-554732                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC | 04 Oct 23 02:09 UTC |
	| start   | -p auto-171116 --memory=3072                           | auto-171116                  | jenkins | v1.31.2 | 04 Oct 23 02:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 02:09:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:09:45.720802  172863 out.go:296] Setting OutFile to fd 1 ...
	I1004 02:09:45.721060  172863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:09:45.721072  172863 out.go:309] Setting ErrFile to fd 2...
	I1004 02:09:45.721079  172863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:09:45.721311  172863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 02:09:45.721949  172863 out.go:303] Setting JSON to false
	I1004 02:09:45.722935  172863 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10337,"bootTime":1696375049,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:09:45.722994  172863 start.go:138] virtualization: kvm guest
	I1004 02:09:45.725427  172863 out.go:177] * [auto-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:09:45.727447  172863 notify.go:220] Checking for updates...
	I1004 02:09:45.727458  172863 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 02:09:45.728986  172863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:09:45.730405  172863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:09:45.732720  172863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:09:45.735438  172863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:09:45.737160  172863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:09:45.739430  172863 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:45.739579  172863 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:45.739726  172863 config.go:182] Loaded profile config "no-preload-273516": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:09:45.739864  172863 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 02:09:45.782983  172863 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:09:45.784501  172863 start.go:298] selected driver: kvm2
	I1004 02:09:45.784521  172863 start.go:902] validating driver "kvm2" against <nil>
	I1004 02:09:45.784538  172863 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:09:45.785367  172863 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:09:45.785468  172863 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:09:45.803386  172863 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 02:09:45.803450  172863 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 02:09:45.803677  172863 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:09:45.803716  172863 cni.go:84] Creating CNI manager for ""
	I1004 02:09:45.803730  172863 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:09:45.803739  172863 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:09:45.803747  172863 start_flags.go:321] config:
	{Name:auto-171116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:auto-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:09:45.803894  172863 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:09:45.806157  172863 out.go:177] * Starting control plane node auto-171116 in cluster auto-171116
	I1004 02:09:45.807679  172863 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:09:45.807718  172863 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 02:09:45.807726  172863 cache.go:57] Caching tarball of preloaded images
	I1004 02:09:45.807866  172863 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:09:45.807890  172863 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 02:09:45.807974  172863 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/config.json ...
	I1004 02:09:45.807992  172863 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/auto-171116/config.json: {Name:mkb6c1c834fdc61e717e64f97b19202895fdec6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:09:45.808140  172863 start.go:365] acquiring machines lock for auto-171116: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 02:09:45.808173  172863 start.go:369] acquired machines lock for "auto-171116" in 17.272µs
	I1004 02:09:45.808195  172863 start.go:93] Provisioning new machine with config: &{Name:auto-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.2 ClusterName:auto-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:09:45.808264  172863 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:09:45.810044  172863 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 02:09:45.810163  172863 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:09:45.810194  172863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:09:45.824933  172863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33287
	I1004 02:09:45.825389  172863 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:09:45.825965  172863 main.go:141] libmachine: Using API Version  1
	I1004 02:09:45.825990  172863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:09:45.826390  172863 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:09:45.826608  172863 main.go:141] libmachine: (auto-171116) Calling .GetMachineName
	I1004 02:09:45.826831  172863 main.go:141] libmachine: (auto-171116) Calling .DriverName
	I1004 02:09:45.827072  172863 start.go:159] libmachine.API.Create for "auto-171116" (driver="kvm2")
	I1004 02:09:45.827104  172863 client.go:168] LocalClient.Create starting
	I1004 02:09:45.827141  172863 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 02:09:45.827177  172863 main.go:141] libmachine: Decoding PEM data...
	I1004 02:09:45.827192  172863 main.go:141] libmachine: Parsing certificate...
	I1004 02:09:45.827246  172863 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 02:09:45.827264  172863 main.go:141] libmachine: Decoding PEM data...
	I1004 02:09:45.827288  172863 main.go:141] libmachine: Parsing certificate...
	I1004 02:09:45.827323  172863 main.go:141] libmachine: Running pre-create checks...
	I1004 02:09:45.827338  172863 main.go:141] libmachine: (auto-171116) Calling .PreCreateCheck
	I1004 02:09:45.827747  172863 main.go:141] libmachine: (auto-171116) Calling .GetConfigRaw
	I1004 02:09:45.828140  172863 main.go:141] libmachine: Creating machine...
	I1004 02:09:45.828156  172863 main.go:141] libmachine: (auto-171116) Calling .Create
	I1004 02:09:45.828331  172863 main.go:141] libmachine: (auto-171116) Creating KVM machine...
	I1004 02:09:45.829476  172863 main.go:141] libmachine: (auto-171116) DBG | found existing default KVM network
	I1004 02:09:45.830875  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:45.830724  172886 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:bb:21} reservation:<nil>}
	I1004 02:09:45.831937  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:45.831833  172886 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c5:97:42} reservation:<nil>}
	I1004 02:09:45.833111  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:45.833030  172886 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:b0:63} reservation:<nil>}
	I1004 02:09:45.834683  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:45.834604  172886 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c5fd0}
	I1004 02:09:45.840767  172863 main.go:141] libmachine: (auto-171116) DBG | trying to create private KVM network mk-auto-171116 192.168.72.0/24...
	I1004 02:09:45.918272  172863 main.go:141] libmachine: (auto-171116) DBG | private KVM network mk-auto-171116 192.168.72.0/24 created
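For reference, the subnet probing above (skip each 192.168.x.0/24 that is already taken, settle on the first free one) can be sketched in Go as below. The hard-coded taken map and candidate list are stand-ins for what is discovered from the existing virbr* interfaces; this illustrates the selection logic only and is not the actual network.go implementation.

package main

import (
	"fmt"
	"net"
)

// taken would in practice be derived from the hypervisor's existing networks
// (virbr3/virbr4/virbr5 above); it is hard-coded here to match the log.
var taken = map[string]bool{
	"192.168.39.0/24": true,
	"192.168.50.0/24": true,
	"192.168.61.0/24": true,
}

// freeSubnet returns the first candidate /24 that is not already in use.
func freeSubnet(candidates []string) (*net.IPNet, error) {
	for _, c := range candidates {
		_, ipnet, err := net.ParseCIDR(c)
		if err != nil {
			return nil, err
		}
		if !taken[ipnet.String()] {
			return ipnet, nil
		}
	}
	return nil, fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	subnet, err := freeSubnet([]string{
		"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // prints 192.168.72.0/24
}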
	I1004 02:09:45.918406  172863 main.go:141] libmachine: (auto-171116) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116 ...
	I1004 02:09:45.918477  172863 main.go:141] libmachine: (auto-171116) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 02:09:45.918680  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:45.918516  172886 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:09:45.918725  172863 main.go:141] libmachine: (auto-171116) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 02:09:46.153125  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:46.152986  172886 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/id_rsa...
	I1004 02:09:46.231038  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:46.230910  172886 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/auto-171116.rawdisk...
	I1004 02:09:46.231070  172863 main.go:141] libmachine: (auto-171116) DBG | Writing magic tar header
	I1004 02:09:46.231086  172863 main.go:141] libmachine: (auto-171116) DBG | Writing SSH key tar header
	I1004 02:09:46.231106  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:46.231019  172886 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116 ...
	I1004 02:09:46.231122  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116
	I1004 02:09:46.231171  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116 (perms=drwx------)
	I1004 02:09:46.231206  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 02:09:46.231218  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:09:46.231232  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:09:46.231261  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 02:09:46.231293  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 02:09:46.231311  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 02:09:46.231330  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:09:46.231345  172863 main.go:141] libmachine: (auto-171116) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:09:46.231363  172863 main.go:141] libmachine: (auto-171116) Creating domain...
	I1004 02:09:46.231379  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:09:46.231399  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:09:46.231413  172863 main.go:141] libmachine: (auto-171116) DBG | Checking permissions on dir: /home
	I1004 02:09:46.231433  172863 main.go:141] libmachine: (auto-171116) DBG | Skipping /home - not owner
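The permission fixes logged above amount to walking up from the machine directory and making sure every parent the process owns carries the owner-execute (traverse) bit, skipping directories it does not own. A minimal sketch of that walk, using the path from the log; it is illustrative rather than minikube's common.go:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureTraversable walks from dir up towards the filesystem root, adding the
// owner-execute bit where it is missing and skipping directories the current
// user cannot modify (cf. "Skipping /home - not owner").
func ensureTraversable(dir string) {
	for p := dir; p != "/" && p != "."; p = filepath.Dir(p) {
		info, err := os.Stat(p)
		if err != nil {
			fmt.Println("skipping", p, "-", err)
			continue
		}
		fmt.Println("Checking permissions on dir:", p)
		if info.Mode().Perm()&0o100 == 0 {
			if err := os.Chmod(p, info.Mode().Perm()|0o100); err != nil {
				fmt.Println("Skipping", p, "- not owner")
			}
		}
	}
}

func main() {
	ensureTraversable("/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116")
}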
	I1004 02:09:46.232497  172863 main.go:141] libmachine: (auto-171116) define libvirt domain using xml: 
	I1004 02:09:46.232516  172863 main.go:141] libmachine: (auto-171116) <domain type='kvm'>
	I1004 02:09:46.232527  172863 main.go:141] libmachine: (auto-171116)   <name>auto-171116</name>
	I1004 02:09:46.232536  172863 main.go:141] libmachine: (auto-171116)   <memory unit='MiB'>3072</memory>
	I1004 02:09:46.232545  172863 main.go:141] libmachine: (auto-171116)   <vcpu>2</vcpu>
	I1004 02:09:46.232554  172863 main.go:141] libmachine: (auto-171116)   <features>
	I1004 02:09:46.232562  172863 main.go:141] libmachine: (auto-171116)     <acpi/>
	I1004 02:09:46.232567  172863 main.go:141] libmachine: (auto-171116)     <apic/>
	I1004 02:09:46.232574  172863 main.go:141] libmachine: (auto-171116)     <pae/>
	I1004 02:09:46.232582  172863 main.go:141] libmachine: (auto-171116)     
	I1004 02:09:46.232588  172863 main.go:141] libmachine: (auto-171116)   </features>
	I1004 02:09:46.232596  172863 main.go:141] libmachine: (auto-171116)   <cpu mode='host-passthrough'>
	I1004 02:09:46.232603  172863 main.go:141] libmachine: (auto-171116)   
	I1004 02:09:46.232614  172863 main.go:141] libmachine: (auto-171116)   </cpu>
	I1004 02:09:46.232655  172863 main.go:141] libmachine: (auto-171116)   <os>
	I1004 02:09:46.232678  172863 main.go:141] libmachine: (auto-171116)     <type>hvm</type>
	I1004 02:09:46.232690  172863 main.go:141] libmachine: (auto-171116)     <boot dev='cdrom'/>
	I1004 02:09:46.232714  172863 main.go:141] libmachine: (auto-171116)     <boot dev='hd'/>
	I1004 02:09:46.232729  172863 main.go:141] libmachine: (auto-171116)     <bootmenu enable='no'/>
	I1004 02:09:46.232741  172863 main.go:141] libmachine: (auto-171116)   </os>
	I1004 02:09:46.232798  172863 main.go:141] libmachine: (auto-171116)   <devices>
	I1004 02:09:46.232833  172863 main.go:141] libmachine: (auto-171116)     <disk type='file' device='cdrom'>
	I1004 02:09:46.232851  172863 main.go:141] libmachine: (auto-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/boot2docker.iso'/>
	I1004 02:09:46.232866  172863 main.go:141] libmachine: (auto-171116)       <target dev='hdc' bus='scsi'/>
	I1004 02:09:46.232879  172863 main.go:141] libmachine: (auto-171116)       <readonly/>
	I1004 02:09:46.232891  172863 main.go:141] libmachine: (auto-171116)     </disk>
	I1004 02:09:46.232903  172863 main.go:141] libmachine: (auto-171116)     <disk type='file' device='disk'>
	I1004 02:09:46.232919  172863 main.go:141] libmachine: (auto-171116)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:09:46.232940  172863 main.go:141] libmachine: (auto-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/auto-171116/auto-171116.rawdisk'/>
	I1004 02:09:46.232959  172863 main.go:141] libmachine: (auto-171116)       <target dev='hda' bus='virtio'/>
	I1004 02:09:46.232973  172863 main.go:141] libmachine: (auto-171116)     </disk>
	I1004 02:09:46.232983  172863 main.go:141] libmachine: (auto-171116)     <interface type='network'>
	I1004 02:09:46.232997  172863 main.go:141] libmachine: (auto-171116)       <source network='mk-auto-171116'/>
	I1004 02:09:46.233009  172863 main.go:141] libmachine: (auto-171116)       <model type='virtio'/>
	I1004 02:09:46.233021  172863 main.go:141] libmachine: (auto-171116)     </interface>
	I1004 02:09:46.233034  172863 main.go:141] libmachine: (auto-171116)     <interface type='network'>
	I1004 02:09:46.233048  172863 main.go:141] libmachine: (auto-171116)       <source network='default'/>
	I1004 02:09:46.233060  172863 main.go:141] libmachine: (auto-171116)       <model type='virtio'/>
	I1004 02:09:46.233073  172863 main.go:141] libmachine: (auto-171116)     </interface>
	I1004 02:09:46.233081  172863 main.go:141] libmachine: (auto-171116)     <serial type='pty'>
	I1004 02:09:46.233088  172863 main.go:141] libmachine: (auto-171116)       <target port='0'/>
	I1004 02:09:46.233095  172863 main.go:141] libmachine: (auto-171116)     </serial>
	I1004 02:09:46.233101  172863 main.go:141] libmachine: (auto-171116)     <console type='pty'>
	I1004 02:09:46.233109  172863 main.go:141] libmachine: (auto-171116)       <target type='serial' port='0'/>
	I1004 02:09:46.233115  172863 main.go:141] libmachine: (auto-171116)     </console>
	I1004 02:09:46.233125  172863 main.go:141] libmachine: (auto-171116)     <rng model='virtio'>
	I1004 02:09:46.233141  172863 main.go:141] libmachine: (auto-171116)       <backend model='random'>/dev/random</backend>
	I1004 02:09:46.233149  172863 main.go:141] libmachine: (auto-171116)     </rng>
	I1004 02:09:46.233155  172863 main.go:141] libmachine: (auto-171116)     
	I1004 02:09:46.233162  172863 main.go:141] libmachine: (auto-171116)     
	I1004 02:09:46.233168  172863 main.go:141] libmachine: (auto-171116)   </devices>
	I1004 02:09:46.233175  172863 main.go:141] libmachine: (auto-171116) </domain>
	I1004 02:09:46.233187  172863 main.go:141] libmachine: (auto-171116) 
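The XML document assembled above is what ultimately gets handed to libvirt ("define libvirt domain using xml" followed by "Creating domain..."). A hedged sketch of that hand-off using the libvirt Go bindings follows; the libvirt.org/go/libvirt import path, the local domain.xml file, and the need for cgo plus the libvirt C libraries are assumptions made for illustration, not minikube's own code path.

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt" // assumed binding; requires cgo and the libvirt C libraries
)

func main() {
	xml, err := os.ReadFile("domain.xml") // the <domain type='kvm'> document above
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // start the freshly defined domain
		log.Fatal(err)
	}
	log.Println("domain defined and started; next step is waiting for its DHCP lease")
}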
	I1004 02:09:46.237429  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:87:98:f5 in network default
	I1004 02:09:46.238068  172863 main.go:141] libmachine: (auto-171116) Ensuring networks are active...
	I1004 02:09:46.238104  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:46.238975  172863 main.go:141] libmachine: (auto-171116) Ensuring network default is active
	I1004 02:09:46.239282  172863 main.go:141] libmachine: (auto-171116) Ensuring network mk-auto-171116 is active
	I1004 02:09:46.239966  172863 main.go:141] libmachine: (auto-171116) Getting domain xml...
	I1004 02:09:46.240722  172863 main.go:141] libmachine: (auto-171116) Creating domain...
	I1004 02:09:47.556547  172863 main.go:141] libmachine: (auto-171116) Waiting to get IP...
	I1004 02:09:47.557249  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:47.557762  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:47.557788  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:47.557751  172886 retry.go:31] will retry after 192.013162ms: waiting for machine to come up
	I1004 02:09:47.751480  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:47.751980  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:47.752012  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:47.751920  172886 retry.go:31] will retry after 269.15387ms: waiting for machine to come up
	I1004 02:09:48.022358  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:48.022910  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:48.022935  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:48.022860  172886 retry.go:31] will retry after 467.674801ms: waiting for machine to come up
	I1004 02:09:48.492612  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:48.493189  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:48.493221  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:48.493124  172886 retry.go:31] will retry after 569.766047ms: waiting for machine to come up
	I1004 02:09:49.065017  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:49.065503  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:49.065529  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:49.065412  172886 retry.go:31] will retry after 499.67655ms: waiting for machine to come up
	I1004 02:09:49.567142  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:49.567622  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:49.567653  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:49.567565  172886 retry.go:31] will retry after 601.047995ms: waiting for machine to come up
	I1004 02:09:50.169707  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:50.170169  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:50.170202  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:50.170113  172886 retry.go:31] will retry after 979.277685ms: waiting for machine to come up
	I1004 02:09:51.151317  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:51.151805  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:51.151832  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:51.151766  172886 retry.go:31] will retry after 1.176780056s: waiting for machine to come up
	I1004 02:09:52.329899  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:52.330486  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:52.330520  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:52.330429  172886 retry.go:31] will retry after 1.655606024s: waiting for machine to come up
	I1004 02:09:53.988139  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:53.988575  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:53.988605  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:53.988518  172886 retry.go:31] will retry after 1.597592879s: waiting for machine to come up
	I1004 02:09:55.588332  172863 main.go:141] libmachine: (auto-171116) DBG | domain auto-171116 has defined MAC address 52:54:00:2d:7f:42 in network mk-auto-171116
	I1004 02:09:55.588840  172863 main.go:141] libmachine: (auto-171116) DBG | unable to find current IP address of domain auto-171116 in network mk-auto-171116
	I1004 02:09:55.588872  172863 main.go:141] libmachine: (auto-171116) DBG | I1004 02:09:55.588772  172886 retry.go:31] will retry after 2.464418279s: waiting for machine to come up
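The "will retry after ...: waiting for machine to come up" lines follow the usual poll-with-growing-delay pattern: look up the domain's IP, and if no DHCP lease exists yet, sleep for a jittered, increasing interval and try again until a deadline passes. A self-contained sketch of that pattern is below; lookupIP is a hypothetical stand-in for reading the mk-auto-171116 leases, and the delays will not match retry.go exactly.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder: a real implementation would read the DHCP leases
// of the cluster network and match the domain's MAC address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls lookupIP with a jittered, doubling delay until an IP is
// found or the deadline elapses.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if _, err := waitForIP("auto-171116", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}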
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:43 UTC, ends at Wed 2023-10-04 02:09:56 UTC. --
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.170322205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385396170300271,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a0c6b1e3-912b-41b9-b779-550b9afa0362 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.171051447Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27723cdc-433a-408f-bd59-b0327216d9c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.171167936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27723cdc-433a-408f-bd59-b0327216d9c9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.171409578Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27723cdc-433a-408f-bd59-b0327216d9c9 name=/runtime.v1.RuntimeService/ListContainers
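The CRI-O journal entries above are CRI gRPC traffic on the runtime socket: periodic Version, ImageFsInfo and ListContainers (empty filter) requests and their responses. A hedged sketch of issuing the same calls from a Go client follows; the /var/run/crio/crio.sock path and the k8s.io/cri-api and grpc module versions are assumptions, and in the running cluster it is the kubelet (or crictl) that generates the requests seen here.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Equivalent of the VersionRequest/VersionResponse pairs in the journal.
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.24.1

	// Equivalent of the ListContainersRequest with an empty filter above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}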
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.221646894Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4ed896f1-4c7b-49ff-ad25-40bc1a395717 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.221739651Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4ed896f1-4c7b-49ff-ad25-40bc1a395717 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.223019329Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6887860e-2528-4058-9d8a-ee8cde93ffb4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.223493624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385396223467840,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=6887860e-2528-4058-9d8a-ee8cde93ffb4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.224541944Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0ec683dd-030c-435a-a2dd-e785c42ef79f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.224590131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0ec683dd-030c-435a-a2dd-e785c42ef79f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.224896932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0ec683dd-030c-435a-a2dd-e785c42ef79f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.276894718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=29832875-f21c-43a5-af02-747dab82b92a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.276983417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=29832875-f21c-43a5-af02-747dab82b92a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.278437814Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=478bf3fb-55a5-4328-ba34-7600f3eccc8d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.278753577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385396278740582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=478bf3fb-55a5-4328-ba34-7600f3eccc8d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.279609941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d1029c96-9c6b-49e5-8758-5a2ab5a2744b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.279690555Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d1029c96-9c6b-49e5-8758-5a2ab5a2744b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.279945083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d1029c96-9c6b-49e5-8758-5a2ab5a2744b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.322867225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a90dbd26-dfe7-4807-923d-53dbd98f689e name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.322952201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a90dbd26-dfe7-4807-923d-53dbd98f689e name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.326645614Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2a17f628-af6f-448f-9849-7c903436318e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.327031488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385396327018785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:93635,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=2a17f628-af6f-448f-9849-7c903436318e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.327879379Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=deeb6512-d567-4b40-9735-5f8d04c35bb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.327951921Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=deeb6512-d567-4b40-9735-5f8d04c35bb5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:56 no-preload-273516 crio[742]: time="2023-10-04 02:09:56.328241569Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:05942e8201b6de29162c9008ba0946da33a2d63df7a3a7d22641cef39242096b,PodSandboxId:3aa2bdd0ded788f956432a0be7ee7ca399462c6fd5a8388ed5239b1721b9ed59,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1696384301499727991,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16cc2d74-3565-4360-9899-bd029b8d2c9d,},Annotations:map[string]string{io.kubernetes.container.hash: 804e6fae,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9,PodSandboxId:909c342bcc02239dc3728f99c1deedf15ff78f2fd8a03ab6e2508c0f6f28d53b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1696384299507613655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-wkrdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0bc46efd-4d1e-4267-9992-d08e8dfe1e2c,},Annotations:map[string]string{io.kubernetes.container.hash: 41d36c1d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"}
,{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1696384292978814521,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-syste
m,io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475,PodSandboxId:644a85f7f3686eb4b88afe814843a6dce5db3943a618e2b08250ee9edc7bfa24,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_EXITED,CreatedAt:1696384291817320831,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 9ee57ba0-6b8f-48cc-afe0-e946ec97f879,},Annotations:map[string]string{io.kubernetes.container.hash: 5338f2be,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8,PodSandboxId:f15ee6807437406c6be380ba99d665b32bab728056acee534871de614c7dbf53,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a059eb62a2ce7a282b40d93886f941fdf9978ca4763a20b9660142c55d44f0dc,State:CONTAINER_RUNNING,CreatedAt:1696384291636264111,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-shlvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2a1c2fe3-420
9-406d-8e28-74d5c3148c6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1fa7f794,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92,PodSandboxId:b97bb39dd4ac41caceb7f0cd58cbe32e160bc350582222cdb04b9b36de27117b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:d3bcd2d91617f34df5881e71e4c5043f33de215942968493fd5e3c8c4a30e56e,State:CONTAINER_RUNNING,CreatedAt:1696384285698680892,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 614882f03fc6563cd52
4e3b9c43687b6,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb,PodSandboxId:2461a8f5daa2fd84c4ca2fc55d38c7e55e66f4de3d0ce874530eb4824ff2cfbd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1696384285323042791,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77fa706a0ad84a510da5d8d1ad33a325,},Annotations:map[string]string{io.ku
bernetes.container.hash: cd968dce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404,PodSandboxId:d8bfac8f3f87c568101c5f54d364658cf12a1095debb1d6e9232c926fc032932,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:14ba5a3244194bccfc270427eac90f1372947fced92f7be78f6c73d1bca1acc2,State:CONTAINER_RUNNING,CreatedAt:1696384285097050771,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d5fddeadc0131ddc8d9e3f74c1e41162,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 3147bbe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461,PodSandboxId:509ba2ebe4e93d9d60b9e1b7379de223e6908e6344428b24a0943069cbcbbfc7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:21f41bd2b1ac14ff168272c68167f634c569772c6a013e6d84d11a80c99d8d9b,State:CONTAINER_RUNNING,CreatedAt:1696384284849562094,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-273516,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 44d8f204be9b0d63cc7d39992bde49cd,},Annotations:map[strin
g]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=deeb6512-d567-4b40-9735-5f8d04c35bb5 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	05942e8201b6d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   3aa2bdd0ded78       busybox
	e3d59ec2af4e1       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      18 minutes ago      Running             coredns                   1                   909c342bcc022       coredns-5dd5756b68-wkrdx
	2c2e9a0977a2f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Running             storage-provisioner       3                   644a85f7f3686       storage-provisioner
	3baef608a9876       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       2                   644a85f7f3686       storage-provisioner
	b413622f7c392       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0                                      18 minutes ago      Running             kube-proxy                1                   f15ee68074374       kube-proxy-shlvt
	946ede03885c7       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8                                      18 minutes ago      Running             kube-scheduler            1                   b97bb39dd4ac4       kube-scheduler-no-preload-273516
	6e2ee480fbb80       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      18 minutes ago      Running             etcd                      1                   2461a8f5daa2f       etcd-no-preload-273516
	9ebf01da00b61       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce                                      18 minutes ago      Running             kube-apiserver            1                   d8bfac8f3f87c       kube-apiserver-no-preload-273516
	1406d9eca4647       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57                                      18 minutes ago      Running             kube-controller-manager   1                   509ba2ebe4e93       kube-controller-manager-no-preload-273516
	
	* 
	* ==> coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 347fb4f25cc546215231b2e9ef34a7838489408c50ad1d77e38b06de967dd388dc540a0db2692259640c7998323f3763426b7a7e73fad2aa89cebddf27cf7c94
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58935 - 27051 "HINFO IN 18115897949314560.2540196831787147618. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.022994197s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-273516
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-273516
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=no-preload-273516
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_41_30_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:41:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-273516
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:09:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:07:20 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:07:20 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:07:20 +0000   Wed, 04 Oct 2023 01:41:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:07:20 +0000   Wed, 04 Oct 2023 01:51:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.165
	  Hostname:    no-preload-273516
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 85b2abee83814eadafe0451f11a59a64
	  System UUID:                85b2abee-8381-4ead-afe0-451f11a59a64
	  Boot ID:                    cb041762-81b2-4e64-9de0-74cdaa7a20f4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-wkrdx                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-273516                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-273516             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-no-preload-273516    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-shlvt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-273516             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-mmm7c              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x7 over 28m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x7 over 28m)  kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                28m                kubelet          Node no-preload-273516 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-273516 event: Registered Node no-preload-273516 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-273516 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-273516 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-273516 event: Registered Node no-preload-273516 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.078638] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.910100] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.713338] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160693] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.446991] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.127295] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.119052] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.179735] systemd-fstab-generator[692]: Ignoring "noauto" for root device
	[  +0.149968] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +0.249311] systemd-fstab-generator[727]: Ignoring "noauto" for root device
	[Oct 4 01:51] systemd-fstab-generator[1253]: Ignoring "noauto" for root device
	[ +15.378079] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] <==
	* {"level":"info","ts":"2023-10-04T01:51:27.336645Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.83.165:2380"}
	{"level":"info","ts":"2023-10-04T01:51:29.090625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 is starting a new election at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.0907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.090735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 received MsgPreVoteResp from 3152c33aadbaa9f5 at term 2"}
	{"level":"info","ts":"2023-10-04T01:51:29.090748Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became candidate at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090753Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 received MsgVoteResp from 3152c33aadbaa9f5 at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3152c33aadbaa9f5 became leader at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.090769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3152c33aadbaa9f5 elected leader 3152c33aadbaa9f5 at term 3"}
	{"level":"info","ts":"2023-10-04T01:51:29.093602Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:51:29.094604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-04T01:51:29.104397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-04T01:51:29.105601Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.83.165:2379"}
	{"level":"info","ts":"2023-10-04T01:51:29.093546Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3152c33aadbaa9f5","local-member-attributes":"{Name:no-preload-273516 ClientURLs:[https://192.168.83.165:2379]}","request-path":"/0/members/3152c33aadbaa9f5/attributes","cluster-id":"7aac9845db42f04b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-04T01:51:29.112787Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-04T01:51:29.112838Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2023-10-04T01:58:23.457422Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.443909ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T01:58:23.457706Z","caller":"traceutil/trace.go:171","msg":"trace[1951961987] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:945; }","duration":"203.778645ms","start":"2023-10-04T01:58:23.253891Z","end":"2023-10-04T01:58:23.457669Z","steps":["trace[1951961987] 'range keys from in-memory index tree'  (duration: 203.337463ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T01:58:25.201468Z","caller":"traceutil/trace.go:171","msg":"trace[1359224245] transaction","detail":"{read_only:false; response_revision:946; number_of_response:1; }","duration":"360.658973ms","start":"2023-10-04T01:58:24.840782Z","end":"2023-10-04T01:58:25.201441Z","steps":["trace[1359224245] 'process raft request'  (duration: 360.424779ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T01:58:25.202806Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T01:58:24.840765Z","time spent":"361.010971ms","remote":"127.0.0.1:38836","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:945 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-10-04T02:01:29.12962Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":850}
	{"level":"info","ts":"2023-10-04T02:01:29.132936Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":850,"took":"2.94917ms","hash":435336003}
	{"level":"info","ts":"2023-10-04T02:01:29.133061Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435336003,"revision":850,"compact-revision":-1}
	{"level":"info","ts":"2023-10-04T02:06:29.138985Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1092}
	{"level":"info","ts":"2023-10-04T02:06:29.14076Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1092,"took":"1.316328ms","hash":1117565512}
	{"level":"info","ts":"2023-10-04T02:06:29.140799Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1117565512,"revision":1092,"compact-revision":850}
	
	* 
	* ==> kernel <==
	*  02:09:56 up 19 min,  0 users,  load average: 0.03, 0.14, 0.16
	Linux no-preload-273516 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] <==
	* I1004 02:06:30.761327       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:06:31.760829       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:06:31.760902       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:06:31.760914       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:06:31.760853       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:06:31.761029       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:06:31.762381       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:07:30.611864       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:07:31.761260       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:07:31.761445       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:07:31.761493       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:07:31.763615       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:07:31.763705       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:07:31.763713       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:08:30.612439       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 02:09:30.612928       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:09:31.762213       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:09:31.762321       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:09:31.762381       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:09:31.764676       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:09:31.764793       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:09:31.764828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] <==
	* I1004 02:04:13.837858       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:04:43.319300       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:04:43.846597       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:05:13.328353       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:05:13.856415       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:05:43.335547       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:05:43.866695       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:06:13.342318       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:06:13.876385       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:06:43.348223       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:06:43.885543       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:07:13.356909       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:13.898550       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:07:40.913986       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="392.543µs"
	E1004 02:07:43.366328       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:07:43.909012       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:07:52.909311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="211.258µs"
	E1004 02:08:13.371982       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:13.917328       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:08:43.377861       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:08:43.925952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:09:13.385227       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:13.936076       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:09:43.392804       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:43.947415       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] <==
	* I1004 01:51:32.089640       1 server_others.go:69] "Using iptables proxy"
	I1004 01:51:32.099894       1 node.go:141] Successfully retrieved node IP: 192.168.83.165
	I1004 01:51:32.135904       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 01:51:32.135953       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 01:51:32.138824       1 server_others.go:152] "Using iptables Proxier"
	I1004 01:51:32.138889       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 01:51:32.139253       1 server.go:846] "Version info" version="v1.28.2"
	I1004 01:51:32.139291       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:51:32.140395       1 config.go:188] "Starting service config controller"
	I1004 01:51:32.140445       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 01:51:32.140468       1 config.go:97] "Starting endpoint slice config controller"
	I1004 01:51:32.140472       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 01:51:32.143264       1 config.go:315] "Starting node config controller"
	I1004 01:51:32.143301       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 01:51:32.241259       1 shared_informer.go:318] Caches are synced for service config
	I1004 01:51:32.241280       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1004 01:51:32.243833       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] <==
	* I1004 01:51:27.690663       1 serving.go:348] Generated self-signed cert in-memory
	I1004 01:51:30.801824       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.2"
	I1004 01:51:30.802015       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 01:51:30.824521       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1004 01:51:30.824631       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1004 01:51:30.825156       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1004 01:51:30.825217       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:51:30.825232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1004 01:51:30.825237       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1004 01:51:30.830399       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1004 01:51:30.834011       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1004 01:51:30.927685       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1004 01:51:30.927825       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1004 01:51:30.927733       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:43 UTC, ends at Wed 2023-10-04 02:09:57 UTC. --
	Oct 04 02:07:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:07:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:07:25 no-preload-273516 kubelet[1259]: E1004 02:07:25.904842    1259 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 02:07:25 no-preload-273516 kubelet[1259]: E1004 02:07:25.904891    1259 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 02:07:25 no-preload-273516 kubelet[1259]: E1004 02:07:25.905604    1259 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dtdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-mmm7c_kube-system(b0660d47-8147-4844-aa22-e8c4b4f40577): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:07:25 no-preload-273516 kubelet[1259]: E1004 02:07:25.905661    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:07:40 no-preload-273516 kubelet[1259]: E1004 02:07:40.893862    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:07:52 no-preload-273516 kubelet[1259]: E1004 02:07:52.894360    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:08:05 no-preload-273516 kubelet[1259]: E1004 02:08:05.894522    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:08:16 no-preload-273516 kubelet[1259]: E1004 02:08:16.893709    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:08:24 no-preload-273516 kubelet[1259]: E1004 02:08:24.023741    1259 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:08:24 no-preload-273516 kubelet[1259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:08:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:08:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:08:28 no-preload-273516 kubelet[1259]: E1004 02:08:28.893332    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:08:43 no-preload-273516 kubelet[1259]: E1004 02:08:43.894508    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:08:56 no-preload-273516 kubelet[1259]: E1004 02:08:56.894563    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:09:10 no-preload-273516 kubelet[1259]: E1004 02:09:10.893812    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:09:21 no-preload-273516 kubelet[1259]: E1004 02:09:21.893081    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:09:24 no-preload-273516 kubelet[1259]: E1004 02:09:24.024008    1259 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:09:24 no-preload-273516 kubelet[1259]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:09:24 no-preload-273516 kubelet[1259]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:09:24 no-preload-273516 kubelet[1259]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:09:33 no-preload-273516 kubelet[1259]: E1004 02:09:33.893573    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	Oct 04 02:09:45 no-preload-273516 kubelet[1259]: E1004 02:09:45.893607    1259 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-mmm7c" podUID="b0660d47-8147-4844-aa22-e8c4b4f40577"
	
	* 
	* ==> storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] <==
	* I1004 01:51:33.111627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:51:33.120908       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:51:33.120980       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:51:50.526767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:51:50.526949       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"90e9a149-c8f5-4f3b-b586-6091789b0f8d", APIVersion:"v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa became leader
	I1004 01:51:50.527672       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa!
	I1004 01:51:50.630509       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-273516_4abc10b7-cf2e-4544-a65c-baf8f75b67fa!
	
	* 
	* ==> storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] <==
	* I1004 01:51:32.041028       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1004 01:51:32.055412       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-273516 -n no-preload-273516
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-273516 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-mmm7c
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c: exit status 1 (84.874346ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-mmm7c" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-273516 describe pod metrics-server-57f55c9bc5-mmm7c: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (296.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (184.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 02:06:56.339528  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 02:08:15.375109  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 02:09:38.427075  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107182 -n old-k8s-version-107182
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:09:42.398935713 +0000 UTC m=+5174.769966736
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-107182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-107182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.959µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-107182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-107182 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-107182 logs -n 25: (1.319216201s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p old-k8s-version-107182        | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-528457                              | cert-expiration-528457       | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	| delete  | -p                                                     | disable-driver-mounts-554732 | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:42 UTC |
	|         | disable-driver-mounts-554732                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:42 UTC | 04 Oct 23 01:43 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-487861             | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:43 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-487861                  | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-487861 --memory=2200 --alsologtostderr   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-273516                  | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-273516                                   | no-preload-273516            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-487861 sudo                              | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-509298                 | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| addons  | enable dashboard -p old-k8s-version-107182             | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-487861                                   | newest-cni-487861            | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:44 UTC |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:44 UTC | 04 Oct 23 01:50 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-509298                                  | embed-certs-509298           | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-107182                              | old-k8s-version-107182       | jenkins | v1.31.2 | 04 Oct 23 01:45 UTC | 04 Oct 23 01:57 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-239802  | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC | 04 Oct 23 01:50 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:50 UTC |                     |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-239802       | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-239802 | jenkins | v1.31.2 | 04 Oct 23 01:53 UTC | 04 Oct 23 02:03 UTC |
	|         | default-k8s-diff-port-239802                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 01:53:11
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 01:53:11.828274  169515 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:53:11.828536  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828547  169515 out.go:309] Setting ErrFile to fd 2...
	I1004 01:53:11.828552  169515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:53:11.828768  169515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:53:11.829347  169515 out.go:303] Setting JSON to false
	I1004 01:53:11.830376  169515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":9343,"bootTime":1696375049,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:53:11.830441  169515 start.go:138] virtualization: kvm guest
	I1004 01:53:11.832711  169515 out.go:177] * [default-k8s-diff-port-239802] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:53:11.834324  169515 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:53:11.835643  169515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:53:11.834361  169515 notify.go:220] Checking for updates...
	I1004 01:53:11.838217  169515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:53:11.839555  169515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:53:11.840846  169515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:53:11.842161  169515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:53:07.280681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:09.778282  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.779681  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.843761  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:53:11.844277  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.844360  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.860250  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45079
	I1004 01:53:11.860700  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.861256  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.861279  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.861643  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.861866  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.862175  169515 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:53:11.862447  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.862487  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.877262  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1004 01:53:11.877711  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.878333  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.878357  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.878806  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.879014  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.917299  169515 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 01:53:11.918706  169515 start.go:298] selected driver: kvm2
	I1004 01:53:11.918721  169515 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.918831  169515 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:53:11.919435  169515 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.919506  169515 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 01:53:11.934986  169515 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 01:53:11.935329  169515 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 01:53:11.935365  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:53:11.935379  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:53:11.935399  169515 start_flags.go:321] config:
	{Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-23980
2 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:53:11.935580  169515 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 01:53:11.937595  169515 out.go:177] * Starting control plane node default-k8s-diff-port-239802 in cluster default-k8s-diff-port-239802
	I1004 01:53:11.938856  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:53:11.938906  169515 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 01:53:11.938918  169515 cache.go:57] Caching tarball of preloaded images
	I1004 01:53:11.939005  169515 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 01:53:11.939019  169515 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 01:53:11.939123  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:53:11.939343  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:53:11.939424  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 58.221µs
	I1004 01:53:11.939444  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:53:11.939453  169515 fix.go:54] fixHost starting: 
	I1004 01:53:11.939742  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:53:11.939789  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:53:11.954196  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40783
	I1004 01:53:11.954631  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:53:11.955177  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:53:11.955207  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:53:11.955546  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:53:11.955732  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.955907  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:53:11.957727  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Running err=<nil>
	W1004 01:53:11.957752  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:53:11.959786  169515 out.go:177] * Updating the running kvm2 "default-k8s-diff-port-239802" VM ...
	I1004 01:53:08.669530  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.168697  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:10.723754  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.223290  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:11.960962  169515 machine.go:88] provisioning docker machine ...
	I1004 01:53:11.960980  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:53:11.961165  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961309  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:53:11.961321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:53:11.961451  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:53:11.964100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964548  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:49:35 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:53:11.964579  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:53:11.964700  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:53:11.964891  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:53:11.965213  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:53:11.965415  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:53:11.965918  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:53:11.965942  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:53:14.858205  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:13.780979  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:16.279971  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:13.170120  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.170376  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:15.724119  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:18.223219  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.930132  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:18.779188  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:17.668906  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:19.669782  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:22.169918  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:20.724642  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:23.225475  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.010157  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:23.279668  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.778425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:24.668233  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:26.669315  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:25.723231  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:28.222973  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:27.082190  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:27.778573  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.779483  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:29.168734  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:31.169219  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:30.223870  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:32.724030  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.162101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:36.234078  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:32.278768  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:34.279611  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:36.779455  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:33.669109  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.669923  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:35.224564  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.723997  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:39.724578  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:38.779567  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:41.278736  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:37.671432  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:40.168863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.168970  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:42.223844  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.224215  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.358317  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:43.278799  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:45.279544  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:44.169371  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.670033  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:46.726544  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.222631  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.426196  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:47.282389  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:49.779291  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:48.673161  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.170963  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:51.223796  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.724046  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.506087  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:52.280232  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:54.778941  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:53.668512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:55.668997  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:56.223812  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.223985  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:57.578187  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:53:57.281468  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:59.780369  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:53:58.169361  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.171086  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:00.723767  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.724182  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:03.658082  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:06.730171  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:02.278547  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:04.279504  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:06.779458  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:02.669174  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.169089  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:05.224336  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.724614  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:08.780155  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:11.281399  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:07.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:09.670536  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.170645  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:10.223678  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.724096  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:12.810084  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:15.882179  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:13.780199  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.280077  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:14.668216  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:16.668736  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:15.223755  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:17.223789  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:19.724040  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.780554  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.283185  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:18.672583  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.169626  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:22.223220  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:24.223653  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:21.962094  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:25.034104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:23.779529  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:25.785001  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:23.668523  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.170080  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:26.725426  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:29.224292  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.114102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:28.278824  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.280812  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:28.668973  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:30.669813  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:31.724077  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.223673  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.186185  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:32.283313  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:34.785440  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:33.169511  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:35.170079  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:36.223744  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:38.223824  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.270113  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:37.279625  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:39.779646  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:37.670022  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.170303  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:40.723833  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.723858  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.723974  167452 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:43.338083  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:42.281698  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.778204  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.779425  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:42.668686  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:44.671405  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:47.170837  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:46.418200  167452 pod_ready.go:81] duration metric: took 4m0.000746433s waiting for pod "metrics-server-57f55c9bc5-ndfck" in "kube-system" namespace to be "Ready" ...
	E1004 01:54:46.418242  167452 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:54:46.418266  167452 pod_ready.go:38] duration metric: took 4m6.792871015s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:54:46.418310  167452 kubeadm.go:640] restartCluster took 4m30.137827083s
	W1004 01:54:46.418446  167452 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:54:46.418484  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:54:49.418125  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:48.780239  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.284905  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:49.174919  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:51.675479  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:52.490104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:53.778907  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:55.778958  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:54.169521  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:56.670982  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:58.570115  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:01.642220  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:54:57.779481  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.782476  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:54:59.170012  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:01.670386  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:00.372786  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.954218871s)
	I1004 01:55:00.372881  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:00.387256  167452 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:55:00.396756  167452 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:55:00.406765  167452 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:55:00.406806  167452 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 01:55:00.625971  167452 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:55:02.279852  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.281525  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.779641  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:04.170863  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:06.671473  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:07.722109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:10.794061  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:08.780879  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.283040  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.183572  167452 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 01:55:12.183661  167452 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:55:12.183766  167452 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:55:12.183877  167452 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:55:12.183978  167452 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:55:12.184074  167452 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:55:12.185782  167452 out.go:204]   - Generating certificates and keys ...
	I1004 01:55:12.185896  167452 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:55:12.185952  167452 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:55:12.186040  167452 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:55:12.186118  167452 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:55:12.186210  167452 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:55:12.186309  167452 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:55:12.186400  167452 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:55:12.186483  167452 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:55:12.186608  167452 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:55:12.186728  167452 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:55:12.186790  167452 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:55:12.186869  167452 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:55:12.186944  167452 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:55:12.187022  167452 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:55:12.187094  167452 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:55:12.187174  167452 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:55:12.187302  167452 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:55:12.187369  167452 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:55:12.188941  167452 out.go:204]   - Booting up control plane ...
	I1004 01:55:12.189059  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:55:12.189132  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:55:12.189211  167452 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:55:12.189324  167452 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:55:12.189452  167452 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:55:12.189504  167452 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 01:55:12.189735  167452 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 01:55:12.189877  167452 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004191 seconds
	I1004 01:55:12.190030  167452 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:55:12.190218  167452 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:55:12.190314  167452 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:55:12.190566  167452 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-509298 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 01:55:12.190670  167452 kubeadm.go:322] [bootstrap-token] Using token: i6ebw8.csx7j4uz10ltteg7
	I1004 01:55:12.192239  167452 out.go:204]   - Configuring RBAC rules ...
	I1004 01:55:12.192387  167452 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:55:12.192462  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 01:55:12.192608  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:55:12.192774  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:55:12.192904  167452 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:55:12.192996  167452 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:55:12.193138  167452 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 01:55:12.193211  167452 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:55:12.193271  167452 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:55:12.193278  167452 kubeadm.go:322] 
	I1004 01:55:12.193325  167452 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:55:12.193332  167452 kubeadm.go:322] 
	I1004 01:55:12.193398  167452 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:55:12.193404  167452 kubeadm.go:322] 
	I1004 01:55:12.193424  167452 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:55:12.193475  167452 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:55:12.193517  167452 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:55:12.193523  167452 kubeadm.go:322] 
	I1004 01:55:12.193565  167452 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 01:55:12.193571  167452 kubeadm.go:322] 
	I1004 01:55:12.193628  167452 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 01:55:12.193638  167452 kubeadm.go:322] 
	I1004 01:55:12.193704  167452 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:55:12.193783  167452 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:55:12.193895  167452 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:55:12.193906  167452 kubeadm.go:322] 
	I1004 01:55:12.194003  167452 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 01:55:12.194073  167452 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:55:12.194080  167452 kubeadm.go:322] 
	I1004 01:55:12.194169  167452 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194254  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:55:12.194273  167452 kubeadm.go:322] 	--control-plane 
	I1004 01:55:12.194279  167452 kubeadm.go:322] 
	I1004 01:55:12.194352  167452 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:55:12.194360  167452 kubeadm.go:322] 
	I1004 01:55:12.194428  167452 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token i6ebw8.csx7j4uz10ltteg7 \
	I1004 01:55:12.194540  167452 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:55:12.194563  167452 cni.go:84] Creating CNI manager for ""
	I1004 01:55:12.194572  167452 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:55:12.196296  167452 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:55:09.172018  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:11.670011  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:12.197574  167452 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:55:12.219217  167452 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:55:12.298578  167452 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:55:12.298671  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.298685  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=embed-certs-509298 minikube.k8s.io/updated_at=2023_10_04T01_55_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.379573  167452 ops.go:34] apiserver oom_adj: -16
	I1004 01:55:12.664606  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:12.821682  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.427770  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.928385  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.428534  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:14.927827  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:13.780253  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.286195  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:14.169232  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:16.669256  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:15.428102  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:15.928404  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.428316  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.928095  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.428581  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:17.928158  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.428061  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:18.927815  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.428285  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:19.927597  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:16.874102  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:19.946137  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:18.779212  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.780120  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:18.671773  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:21.169373  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:20.428231  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:20.927662  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.427644  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:21.927803  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.427969  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:22.928321  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.428088  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:23.928382  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.427968  167452 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:55:24.686625  167452 kubeadm.go:1081] duration metric: took 12.388021854s to wait for elevateKubeSystemPrivileges.
	I1004 01:55:24.686650  167452 kubeadm.go:406] StartCluster complete in 5m8.467148399s
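
The block of repeated "kubectl get sa default" invocations above is minikube waiting (roughly every 500ms, about 12.4s in total here) for the cluster's default ServiceAccount to exist before elevating kube-system privileges. A minimal Go sketch of that polling pattern, assuming a hypothetical waitForDefaultSA helper and reusing the binary and kubeconfig paths seen in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
	// timeout elapses, mirroring the ~500ms retry cadence visible in the log above.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
			if cmd.Run() == nil {
				return nil // the default ServiceAccount exists; the RBAC bootstrap can proceed
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default ServiceAccount not created within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.2/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}
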
	I1004 01:55:24.686670  167452 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.686772  167452 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:55:24.689005  167452 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:55:24.691164  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:55:24.691505  167452 config.go:182] Loaded profile config "embed-certs-509298": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:55:24.691524  167452 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:55:24.691609  167452 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-509298"
	I1004 01:55:24.691645  167452 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-509298"
	W1004 01:55:24.691666  167452 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:55:24.691681  167452 addons.go:69] Setting default-storageclass=true in profile "embed-certs-509298"
	I1004 01:55:24.691711  167452 addons.go:69] Setting metrics-server=true in profile "embed-certs-509298"
	I1004 01:55:24.691721  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.691750  167452 addons.go:231] Setting addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:24.691713  167452 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-509298"
	W1004 01:55:24.691763  167452 addons.go:240] addon metrics-server should already be in state true
	I1004 01:55:24.692075  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692423  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692471  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692522  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.692566  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.692591  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.710712  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I1004 01:55:24.711360  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.711863  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34459
	I1004 01:55:24.712115  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712145  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.712236  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.712668  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.712925  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.712950  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.713327  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.713364  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713391  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.713880  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.713918  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.715208  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33099
	I1004 01:55:24.715594  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.716155  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.716185  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.716523  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.716732  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.720408  167452 addons.go:231] Setting addon default-storageclass=true in "embed-certs-509298"
	W1004 01:55:24.720590  167452 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:55:24.720630  167452 host.go:66] Checking if "embed-certs-509298" exists ...
	I1004 01:55:24.720922  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.720963  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.731384  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I1004 01:55:24.732142  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.732918  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.732946  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.733348  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.733666  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38909
	I1004 01:55:24.733699  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.734163  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.734711  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.734737  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.735163  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.735400  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.735991  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.738353  167452 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:55:24.740203  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:55:24.740222  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:55:24.737643  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.740244  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.742072  167452 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:55:24.743597  167452 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:24.743626  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:55:24.743648  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.744536  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745006  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.745048  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.745279  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.745519  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.745719  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.745878  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.748789  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.748842  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I1004 01:55:24.749267  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.749298  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.749354  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.749818  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.749892  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.749978  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.750177  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.750270  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.750325  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.750752  167452 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:55:24.750802  167452 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:55:24.751018  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.768787  167452 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I1004 01:55:24.769394  167452 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:55:24.770412  167452 main.go:141] libmachine: Using API Version  1
	I1004 01:55:24.770438  167452 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:55:24.770803  167452 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:55:24.770982  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetState
	I1004 01:55:24.772831  167452 main.go:141] libmachine: (embed-certs-509298) Calling .DriverName
	I1004 01:55:24.773101  167452 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.773120  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:55:24.773138  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHHostname
	I1004 01:55:24.776980  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777337  167452 main.go:141] libmachine: (embed-certs-509298) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1d:84:13", ip: ""} in network mk-embed-certs-509298: {Iface:virbr4 ExpiryTime:2023-10-04 02:41:32 +0000 UTC Type:0 Mac:52:54:00:1d:84:13 Iaid: IPaddr:192.168.50.170 Prefix:24 Hostname:embed-certs-509298 Clientid:01:52:54:00:1d:84:13}
	I1004 01:55:24.777390  167452 main.go:141] libmachine: (embed-certs-509298) DBG | domain embed-certs-509298 has defined IP address 192.168.50.170 and MAC address 52:54:00:1d:84:13 in network mk-embed-certs-509298
	I1004 01:55:24.777623  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHPort
	I1004 01:55:24.777827  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHKeyPath
	I1004 01:55:24.778030  167452 main.go:141] libmachine: (embed-certs-509298) Calling .GetSSHUsername
	I1004 01:55:24.778218  167452 sshutil.go:53] new ssh client: &{IP:192.168.50.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/embed-certs-509298/id_rsa Username:docker}
	I1004 01:55:24.827144  167452 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-509298" context rescaled to 1 replicas
	I1004 01:55:24.827188  167452 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.170 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:55:24.829039  167452 out.go:177] * Verifying Kubernetes components...
	I1004 01:55:24.830422  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:24.912112  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:55:24.912145  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:55:24.941943  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:55:24.953635  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:55:24.953669  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:55:24.964038  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:55:25.010973  167452 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.011004  167452 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:55:25.069236  167452 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:55:25.073447  167452 node_ready.go:35] waiting up to 6m0s for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.073533  167452 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
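
The sed pipeline above edits the coredns ConfigMap in place: it inserts a hosts block in front of the "forward . /etc/resolv.conf" directive and a "log" directive in front of "errors". Reconstructed from that expression (only the affected directives are shown; the rest of the Corefile is unchanged and not present in the log), the resulting fragment looks like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...

As the later "host record injected into CoreDNS's ConfigMap" line confirms, this makes in-cluster DNS resolve host.minikube.internal to the host's address on the VM network (192.168.50.1), with all other names falling through to the usual resolvers.
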
	I1004 01:55:26.026178  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:23.280683  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.280934  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.276517  167452 node_ready.go:49] node "embed-certs-509298" has status "Ready":"True"
	I1004 01:55:25.276548  167452 node_ready.go:38] duration metric: took 203.068295ms waiting for node "embed-certs-509298" to be "Ready" ...
	I1004 01:55:25.276561  167452 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:25.459727  167452 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:26.648518  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.706528042s)
	I1004 01:55:26.648633  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.648655  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.648984  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649002  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.649012  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.649021  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.649326  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:26.649367  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.649378  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:26.670495  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:26.670520  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:26.670831  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:26.670890  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318331  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.35425456s)
	I1004 01:55:27.318392  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318407  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318442  167452 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.249161738s)
	I1004 01:55:27.318496  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318502  167452 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.244935012s)
	I1004 01:55:27.318516  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318526  167452 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 01:55:27.318839  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318886  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.318904  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318915  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318934  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318944  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.318946  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.318966  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.318980  167452 main.go:141] libmachine: Making call to close driver server
	I1004 01:55:27.318993  167452 main.go:141] libmachine: (embed-certs-509298) Calling .Close
	I1004 01:55:27.319203  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319225  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319232  167452 main.go:141] libmachine: (embed-certs-509298) DBG | Closing plugin on server side
	I1004 01:55:27.319242  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.319257  167452 addons.go:467] Verifying addon metrics-server=true in "embed-certs-509298"
	I1004 01:55:27.319290  167452 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:55:27.319300  167452 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:55:27.321408  167452 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I1004 01:55:23.171045  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:25.171137  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.323360  167452 addons.go:502] enable addons completed in 2.631835233s: enabled=[default-storageclass metrics-server storage-provisioner]
	I1004 01:55:27.504611  167452 pod_ready.go:102] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:28.987732  167452 pod_ready.go:92] pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.987757  167452 pod_ready.go:81] duration metric: took 3.527990687s waiting for pod "coredns-5dd5756b68-79qrq" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.987769  167452 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993933  167452 pod_ready.go:92] pod "etcd-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:28.993953  167452 pod_ready.go:81] duration metric: took 6.17579ms waiting for pod "etcd-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:28.993966  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000725  167452 pod_ready.go:92] pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.000747  167452 pod_ready.go:81] duration metric: took 6.77205ms waiting for pod "kube-apiserver-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.000759  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005757  167452 pod_ready.go:92] pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.005779  167452 pod_ready.go:81] duration metric: took 5.011182ms waiting for pod "kube-controller-manager-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.005790  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010519  167452 pod_ready.go:92] pod "kube-proxy-f99th" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.010537  167452 pod_ready.go:81] duration metric: took 4.738537ms waiting for pod "kube-proxy-f99th" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.010548  167452 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383772  167452 pod_ready.go:92] pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace has status "Ready":"True"
	I1004 01:55:29.383795  167452 pod_ready.go:81] duration metric: took 373.240101ms waiting for pod "kube-scheduler-embed-certs-509298" in "kube-system" namespace to be "Ready" ...
	I1004 01:55:29.383803  167452 pod_ready.go:38] duration metric: took 4.107228637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:29.383834  167452 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:29.383882  167452 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:29.399227  167452 api_server.go:72] duration metric: took 4.572006648s to wait for apiserver process to appear ...
	I1004 01:55:29.399259  167452 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:29.399279  167452 api_server.go:253] Checking apiserver healthz at https://192.168.50.170:8443/healthz ...
	I1004 01:55:29.405336  167452 api_server.go:279] https://192.168.50.170:8443/healthz returned 200:
	ok
	I1004 01:55:29.406768  167452 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:29.406794  167452 api_server.go:131] duration metric: took 7.526875ms to wait for apiserver health ...
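
The healthz wait above is a plain HTTPS GET against the API server, considered successful once it returns 200 with the body "ok". A rough Go sketch of such a probe (illustrative only: it skips TLS verification instead of presenting minikube's client certificates, and relies on /healthz being readable anonymously, which is the Kubernetes default):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues the same kind of GET the log shows against
	// https://192.168.50.170:8443/healthz and treats a 200 response as healthy.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		fmt.Println(checkHealthz("https://192.168.50.170:8443/healthz"))
	}
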
	I1004 01:55:29.406804  167452 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:29.586194  167452 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:29.586225  167452 system_pods.go:61] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.586230  167452 system_pods.go:61] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.586236  167452 system_pods.go:61] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.586241  167452 system_pods.go:61] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.586248  167452 system_pods.go:61] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.586253  167452 system_pods.go:61] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.586261  167452 system_pods.go:61] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.586269  167452 system_pods.go:61] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.586276  167452 system_pods.go:74] duration metric: took 179.466307ms to wait for pod list to return data ...
	I1004 01:55:29.586289  167452 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:29.782372  167452 default_sa.go:45] found service account: "default"
	I1004 01:55:29.782395  167452 default_sa.go:55] duration metric: took 196.098004ms for default service account to be created ...
	I1004 01:55:29.782403  167452 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:29.988230  167452 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:29.988261  167452 system_pods.go:89] "coredns-5dd5756b68-79qrq" [0bbb5cfe-1fbf-426a-9866-0d5ce92e0519] Running
	I1004 01:55:29.988267  167452 system_pods.go:89] "etcd-embed-certs-509298" [d295a50a-facc-4682-a79b-b8df86427149] Running
	I1004 01:55:29.988271  167452 system_pods.go:89] "kube-apiserver-embed-certs-509298" [00c025b9-c89c-452f-84ea-f5f01011aec5] Running
	I1004 01:55:29.988276  167452 system_pods.go:89] "kube-controller-manager-embed-certs-509298" [c90175de-b742-4817-8ec6-da4f6055d65e] Running
	I1004 01:55:29.988281  167452 system_pods.go:89] "kube-proxy-f99th" [984b2db7-6f82-45db-888f-da52230d1bc5] Running
	I1004 01:55:29.988285  167452 system_pods.go:89] "kube-scheduler-embed-certs-509298" [765f21f1-6ec9-41dc-a067-c132d1b30d6c] Running
	I1004 01:55:29.988298  167452 system_pods.go:89] "metrics-server-57f55c9bc5-27696" [3beb8c73-4e3b-45a8-a4c8-39b7c2da4cb1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:29.988305  167452 system_pods.go:89] "storage-provisioner" [c1d1d8ba-3421-4e49-9138-9efdd0392e83] Running
	I1004 01:55:29.988313  167452 system_pods.go:126] duration metric: took 205.9045ms to wait for k8s-apps to be running ...
	I1004 01:55:29.988323  167452 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:29.988369  167452 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:30.003487  167452 system_svc.go:56] duration metric: took 15.153598ms WaitForService to wait for kubelet.
	I1004 01:55:30.003513  167452 kubeadm.go:581] duration metric: took 5.176299768s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:30.003534  167452 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:30.184152  167452 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:30.184177  167452 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:30.184186  167452 node_conditions.go:105] duration metric: took 180.648418ms to run NodePressure ...
	I1004 01:55:30.184198  167452 start.go:228] waiting for startup goroutines ...
	I1004 01:55:30.184204  167452 start.go:233] waiting for cluster config update ...
	I1004 01:55:30.184213  167452 start.go:242] writing updated cluster config ...
	I1004 01:55:30.184486  167452 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:30.233803  167452 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:30.235636  167452 out.go:177] * Done! kubectl is now configured to use "embed-certs-509298" cluster and "default" namespace by default
	I1004 01:55:29.098156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:27.779362  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.779502  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:31.781186  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:27.670021  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:29.678512  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:32.172222  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:35.178103  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:34.279433  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:36.781532  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:34.669275  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:37.170113  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:38.254127  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:39.278584  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.279085  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:39.668721  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:41.670095  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:44.330119  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:43.780710  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:45.782354  166755 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.472905  166755 pod_ready.go:81] duration metric: took 4m0.000518679s waiting for pod "metrics-server-57f55c9bc5-mmm7c" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:46.472936  166755 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:46.472946  166755 pod_ready.go:38] duration metric: took 4m5.201194434s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
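
Each pod_ready line above is one poll of the pod's Ready condition; the wait for metrics-server-57f55c9bc5-mmm7c gives up here after its 4-minute budget with "context deadline exceeded". A rough client-go equivalent of that check (a sketch, not minikube's code; the kubeconfig path and pod name are taken from the log, and the 2-second interval is an assumption based on the poll spacing above):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the named pod has its Ready condition set to True,
	// which is the condition the pod_ready checks above keep polling for.
	func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			ready, err := podIsReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-mmm7c")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				// matches the "context deadline exceeded" outcome logged above
				fmt.Println("timed out waiting for Ready:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
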
	I1004 01:55:46.472975  166755 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:55:46.473020  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:46.473075  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:46.533201  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:46.533233  166755 cri.go:89] found id: ""
	I1004 01:55:46.533243  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:46.533304  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.538613  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:46.538673  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:46.580801  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:46.580826  166755 cri.go:89] found id: ""
	I1004 01:55:46.580834  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:46.580896  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.586423  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:46.586510  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:46.645487  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:46.645526  166755 cri.go:89] found id: ""
	I1004 01:55:46.645535  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:46.645618  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.650643  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:46.650719  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:46.693457  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:46.693482  166755 cri.go:89] found id: ""
	I1004 01:55:46.693492  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:46.693553  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.698463  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:46.698538  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:46.744251  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:46.744279  166755 cri.go:89] found id: ""
	I1004 01:55:46.744289  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:46.744353  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.749343  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:46.749419  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:46.792717  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:46.792745  166755 cri.go:89] found id: ""
	I1004 01:55:46.792755  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:46.792820  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.797417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:46.797492  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:46.843004  166755 cri.go:89] found id: ""
	I1004 01:55:46.843033  166755 logs.go:284] 0 containers: []
	W1004 01:55:46.843044  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:46.843051  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:46.843114  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:44.169475  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:46.171848  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:47.402086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:46.883372  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:46.883397  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.883405  166755 cri.go:89] found id: ""
	I1004 01:55:46.883415  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:46.883476  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.888350  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:46.892981  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:46.893010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:46.936801  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:46.936829  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:46.983092  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:46.983124  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:46.997604  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:46.997634  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:47.041461  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:47.041500  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:47.098192  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:47.098234  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:47.139982  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:47.140010  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:47.184753  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:47.184789  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:47.242417  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:47.242456  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:47.290664  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:47.290696  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:47.332998  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:47.333035  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:47.779448  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:47.779490  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:47.951031  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:47.951067  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.505155  166755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:55:50.522774  166755 api_server.go:72] duration metric: took 4m16.635946913s to wait for apiserver process to appear ...
	I1004 01:55:50.522804  166755 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:55:50.522848  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:50.522929  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:50.565196  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:50.565220  166755 cri.go:89] found id: ""
	I1004 01:55:50.565232  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:50.565288  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.569426  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:50.569488  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:50.608113  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:50.608138  166755 cri.go:89] found id: ""
	I1004 01:55:50.608147  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:50.608194  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.612671  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:50.612730  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:50.659777  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.659806  166755 cri.go:89] found id: ""
	I1004 01:55:50.659817  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:50.659888  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.664188  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:50.664260  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:50.709318  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:50.709346  166755 cri.go:89] found id: ""
	I1004 01:55:50.709358  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:50.709422  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.713604  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:50.713674  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:50.757565  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:50.757597  166755 cri.go:89] found id: ""
	I1004 01:55:50.757607  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:50.757666  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.761646  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:50.761711  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:50.802683  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:50.802712  166755 cri.go:89] found id: ""
	I1004 01:55:50.802722  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:50.802785  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.807369  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:50.807443  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:50.849917  166755 cri.go:89] found id: ""
	I1004 01:55:50.849952  166755 logs.go:284] 0 containers: []
	W1004 01:55:50.849965  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:50.849974  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:50.850042  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:50.889329  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:50.889353  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:50.889357  166755 cri.go:89] found id: ""
	I1004 01:55:50.889365  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:50.889489  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.894295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:50.898319  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:50.898345  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:50.950303  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:50.950339  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:50.989731  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:50.989767  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:51.036483  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:51.036526  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:51.094053  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:51.094109  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:51.234887  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:51.234922  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:51.283233  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:51.283276  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:51.340569  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:51.340610  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:51.751585  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:51.751629  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:51.765404  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:51.765446  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:51.813579  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:51.813611  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:51.853408  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:51.853458  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:48.670114  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:51.169274  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:53.482075  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:56.554101  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:55:51.899649  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:51.899686  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.447493  166755 api_server.go:253] Checking apiserver healthz at https://192.168.83.165:8443/healthz ...
	I1004 01:55:54.453104  166755 api_server.go:279] https://192.168.83.165:8443/healthz returned 200:
	ok
	I1004 01:55:54.455299  166755 api_server.go:141] control plane version: v1.28.2
	I1004 01:55:54.455327  166755 api_server.go:131] duration metric: took 3.932514868s to wait for apiserver health ...
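
The healthz probe logged above hits kube-apiserver's /healthz endpoint directly. A minimal way to reproduce it by hand from the host, assuming the endpoint still allows unauthenticated access (the usual default) and reusing the IP and port from the log:

	curl -k https://192.168.83.165:8443/healthz
	# a healthy control plane answers with the body:
	# ok
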
	I1004 01:55:54.455338  166755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:55:54.455368  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1004 01:55:54.455431  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1004 01:55:54.501159  166755 cri.go:89] found id: "9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:54.501180  166755 cri.go:89] found id: ""
	I1004 01:55:54.501188  166755 logs.go:284] 1 containers: [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404]
	I1004 01:55:54.501250  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.506342  166755 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1004 01:55:54.506418  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1004 01:55:54.548780  166755 cri.go:89] found id: "6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:54.548801  166755 cri.go:89] found id: ""
	I1004 01:55:54.548808  166755 logs.go:284] 1 containers: [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb]
	I1004 01:55:54.548863  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.560318  166755 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1004 01:55:54.560397  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1004 01:55:54.606477  166755 cri.go:89] found id: "e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:54.606509  166755 cri.go:89] found id: ""
	I1004 01:55:54.606521  166755 logs.go:284] 1 containers: [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9]
	I1004 01:55:54.606581  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.611004  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1004 01:55:54.611069  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1004 01:55:54.657003  166755 cri.go:89] found id: "946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:54.657031  166755 cri.go:89] found id: ""
	I1004 01:55:54.657041  166755 logs.go:284] 1 containers: [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92]
	I1004 01:55:54.657106  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.661386  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1004 01:55:54.661459  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1004 01:55:54.713209  166755 cri.go:89] found id: "b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:54.713237  166755 cri.go:89] found id: ""
	I1004 01:55:54.713246  166755 logs.go:284] 1 containers: [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8]
	I1004 01:55:54.713295  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.718417  166755 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1004 01:55:54.718489  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1004 01:55:54.767945  166755 cri.go:89] found id: "1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:54.767969  166755 cri.go:89] found id: ""
	I1004 01:55:54.767979  166755 logs.go:284] 1 containers: [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461]
	I1004 01:55:54.768040  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.772488  166755 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1004 01:55:54.772576  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1004 01:55:54.823905  166755 cri.go:89] found id: ""
	I1004 01:55:54.823935  166755 logs.go:284] 0 containers: []
	W1004 01:55:54.823945  166755 logs.go:286] No container was found matching "kindnet"
	I1004 01:55:54.823954  166755 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1004 01:55:54.824017  166755 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1004 01:55:54.878037  166755 cri.go:89] found id: "2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:54.878069  166755 cri.go:89] found id: "3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:54.878076  166755 cri.go:89] found id: ""
	I1004 01:55:54.878086  166755 logs.go:284] 2 containers: [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475]
	I1004 01:55:54.878146  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.883456  166755 ssh_runner.go:195] Run: which crictl
	I1004 01:55:54.887685  166755 logs.go:123] Gathering logs for describe nodes ...
	I1004 01:55:54.887708  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1004 01:55:55.021714  166755 logs.go:123] Gathering logs for coredns [e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9] ...
	I1004 01:55:55.021761  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3d59ec2af4e11a5b4995437f2a389bbeef1b201e998c7cb481967032f341ec9"
	I1004 01:55:55.066557  166755 logs.go:123] Gathering logs for kubelet ...
	I1004 01:55:55.066595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1004 01:55:55.125278  166755 logs.go:123] Gathering logs for etcd [6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb] ...
	I1004 01:55:55.125336  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e2ee480fbb8026da32339c0a1e7bbcd9559914c5b297332ad7e19dfc8d938fb"
	I1004 01:55:55.170570  166755 logs.go:123] Gathering logs for storage-provisioner [3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475] ...
	I1004 01:55:55.170607  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3baef608a9876ab4e59b4d25cea6acb6e2779b32b3d3233c96b8884c3e853475"
	I1004 01:55:55.212833  166755 logs.go:123] Gathering logs for CRI-O ...
	I1004 01:55:55.212866  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1004 01:55:55.552035  166755 logs.go:123] Gathering logs for container status ...
	I1004 01:55:55.552080  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1004 01:55:55.601698  166755 logs.go:123] Gathering logs for kube-apiserver [9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404] ...
	I1004 01:55:55.601738  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ebf01da00b61a43715a8114138f226ff098e4048b900d01b419e930aadac404"
	I1004 01:55:55.662745  166755 logs.go:123] Gathering logs for kube-proxy [b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8] ...
	I1004 01:55:55.662786  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b413622f7c3929146c8e1bdacb9880ff1eefff85b4205c538b51f34caf0ab4b8"
	I1004 01:55:55.707632  166755 logs.go:123] Gathering logs for kube-scheduler [946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92] ...
	I1004 01:55:55.707665  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 946ede03885c7e873882c295900febdf6bfb8cc629cb6773a7fed287af843d92"
	I1004 01:55:55.746461  166755 logs.go:123] Gathering logs for kube-controller-manager [1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461] ...
	I1004 01:55:55.746489  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1406d9eca4647bf3d9e916368dfc2d01c2ce5eb5cef6d8406a20d1876f0a7461"
	I1004 01:55:55.809111  166755 logs.go:123] Gathering logs for storage-provisioner [2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299] ...
	I1004 01:55:55.809150  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2c2e9a0977a2fc5cbd2ff1cd8d8b6a9bc07a9c13878523427a2c1b5abb519299"
	I1004 01:55:55.850557  166755 logs.go:123] Gathering logs for dmesg ...
	I1004 01:55:55.850595  166755 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1004 01:55:53.670067  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:55.670340  167496 pod_ready.go:102] pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace has status "Ready":"False"
	I1004 01:55:58.374828  166755 system_pods.go:59] 8 kube-system pods found
	I1004 01:55:58.374864  166755 system_pods.go:61] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.374871  166755 system_pods.go:61] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.374878  166755 system_pods.go:61] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.374885  166755 system_pods.go:61] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.374891  166755 system_pods.go:61] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.374898  166755 system_pods.go:61] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.374909  166755 system_pods.go:61] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.374919  166755 system_pods.go:61] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.374934  166755 system_pods.go:74] duration metric: took 3.919586902s to wait for pod list to return data ...
	I1004 01:55:58.374943  166755 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:55:58.379203  166755 default_sa.go:45] found service account: "default"
	I1004 01:55:58.379228  166755 default_sa.go:55] duration metric: took 4.271125ms for default service account to be created ...
	I1004 01:55:58.379237  166755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:55:58.389346  166755 system_pods.go:86] 8 kube-system pods found
	I1004 01:55:58.389369  166755 system_pods.go:89] "coredns-5dd5756b68-wkrdx" [0bc46efd-4d1e-4267-9992-d08e8dfe1e2c] Running
	I1004 01:55:58.389375  166755 system_pods.go:89] "etcd-no-preload-273516" [4c94c8db-3fd2-4c0f-bed5-d2c31d209623] Running
	I1004 01:55:58.389379  166755 system_pods.go:89] "kube-apiserver-no-preload-273516" [b7793fc0-fdfa-463a-aefc-c29657d4317f] Running
	I1004 01:55:58.389384  166755 system_pods.go:89] "kube-controller-manager-no-preload-273516" [34222ff3-5a73-4a33-b479-cbc8314cdfc1] Running
	I1004 01:55:58.389388  166755 system_pods.go:89] "kube-proxy-shlvt" [2a1c2fe3-4209-406d-8e28-74d5c3148c6d] Running
	I1004 01:55:58.389391  166755 system_pods.go:89] "kube-scheduler-no-preload-273516" [5421da5c-239a-4dff-be87-06ab12f1d63b] Running
	I1004 01:55:58.389399  166755 system_pods.go:89] "metrics-server-57f55c9bc5-mmm7c" [b0660d47-8147-4844-aa22-e8c4b4f40577] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:55:58.389404  166755 system_pods.go:89] "storage-provisioner" [9ee57ba0-6b8f-48cc-afe0-e946ec97f879] Running
	I1004 01:55:58.389411  166755 system_pods.go:126] duration metric: took 10.168718ms to wait for k8s-apps to be running ...
	I1004 01:55:58.389422  166755 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:55:58.389467  166755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:55:58.410785  166755 system_svc.go:56] duration metric: took 21.353423ms WaitForService to wait for kubelet.
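
The kubelet wait above is a plain systemd activity check; the same thing run by hand on the node (sketch):

	sudo systemctl is-active kubelet    # prints "active" once the unit is running
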
	I1004 01:55:58.410814  166755 kubeadm.go:581] duration metric: took 4m24.523994722s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:55:58.410840  166755 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:55:58.414873  166755 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:55:58.414899  166755 node_conditions.go:123] node cpu capacity is 2
	I1004 01:55:58.414913  166755 node_conditions.go:105] duration metric: took 4.067596ms to run NodePressure ...
	I1004 01:55:58.414927  166755 start.go:228] waiting for startup goroutines ...
	I1004 01:55:58.414936  166755 start.go:233] waiting for cluster config update ...
	I1004 01:55:58.414948  166755 start.go:242] writing updated cluster config ...
	I1004 01:55:58.415228  166755 ssh_runner.go:195] Run: rm -f paused
	I1004 01:55:58.469095  166755 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 01:55:58.470860  166755 out.go:177] * Done! kubectl is now configured to use "no-preload-273516" cluster and "default" namespace by default
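
At this point the no-preload-273516 profile is up and its context has been written to the kubeconfig. A quick hand check of that state, assuming a standard kubeconfig location rather than the test harness paths:

	kubectl config current-context                      # expected: no-preload-273516
	kubectl --context no-preload-273516 get pods -n kube-system
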
	I1004 01:55:57.863028  167496 pod_ready.go:81] duration metric: took 4m0.000377885s waiting for pod "metrics-server-74d5856cc6-s2lw2" in "kube-system" namespace to be "Ready" ...
	E1004 01:55:57.863064  167496 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 01:55:57.863085  167496 pod_ready.go:38] duration metric: took 4m1.198718353s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:55:57.863115  167496 kubeadm.go:640] restartCluster took 5m18.524534819s
	W1004 01:55:57.863173  167496 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1004 01:55:57.863207  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 01:56:02.773154  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.909900495s)
	I1004 01:56:02.773229  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:02.786455  167496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:56:02.796780  167496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:56:02.806618  167496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:56:02.806677  167496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1004 01:56:02.872853  167496 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1004 01:56:02.872972  167496 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 01:56:03.024967  167496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 01:56:03.025128  167496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 01:56:03.025294  167496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 01:56:03.249926  167496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 01:56:03.251503  167496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 01:56:03.259788  167496 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1004 01:56:03.380740  167496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 01:56:03.382796  167496 out.go:204]   - Generating certificates and keys ...
	I1004 01:56:03.382964  167496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 01:56:03.383087  167496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 01:56:03.383195  167496 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 01:56:03.383291  167496 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 01:56:03.383404  167496 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 01:56:03.383494  167496 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 01:56:03.383899  167496 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 01:56:03.384184  167496 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 01:56:03.384678  167496 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 01:56:03.385233  167496 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 01:56:03.385302  167496 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 01:56:03.385358  167496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 01:56:03.892124  167496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 01:56:04.106548  167496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 01:56:04.323375  167496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 01:56:04.510112  167496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 01:56:04.512389  167496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 01:56:02.634095  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:05.710104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:04.514200  167496 out.go:204]   - Booting up control plane ...
	I1004 01:56:04.514318  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 01:56:04.523675  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 01:56:04.534185  167496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 01:56:04.535396  167496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 01:56:04.551484  167496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
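
While kubeadm waits for the control plane to come up as static Pods, the same progress can be watched on the node itself; a sketch using the manifest directory named in the log and the same crictl pattern the test runs elsewhere:

	ls /etc/kubernetes/manifests/             # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml, etcd.yaml
	sudo crictl ps -a --name=kube-apiserver   # shows the apiserver container once it is created
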
	I1004 01:56:11.786134  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:14.564099  167496 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.011014 seconds
	I1004 01:56:14.564257  167496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 01:56:14.578656  167496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 01:56:15.106513  167496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 01:56:15.106688  167496 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-107182 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1004 01:56:15.616926  167496 kubeadm.go:322] [bootstrap-token] Using token: ocks1c.c9c0w76e1jxk27wy
	I1004 01:56:15.619692  167496 out.go:204]   - Configuring RBAC rules ...
	I1004 01:56:15.619849  167496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 01:56:15.627037  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 01:56:15.631821  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 01:56:15.635639  167496 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 01:56:15.641343  167496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 01:56:15.709440  167496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 01:56:16.046524  167496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 01:56:16.046544  167496 kubeadm.go:322] 
	I1004 01:56:16.046605  167496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 01:56:16.046616  167496 kubeadm.go:322] 
	I1004 01:56:16.046691  167496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 01:56:16.046698  167496 kubeadm.go:322] 
	I1004 01:56:16.046727  167496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 01:56:16.046781  167496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 01:56:16.046877  167496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 01:56:16.046902  167496 kubeadm.go:322] 
	I1004 01:56:16.046980  167496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 01:56:16.047101  167496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 01:56:16.047198  167496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 01:56:16.047210  167496 kubeadm.go:322] 
	I1004 01:56:16.047316  167496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1004 01:56:16.047429  167496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 01:56:16.047448  167496 kubeadm.go:322] 
	I1004 01:56:16.047560  167496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.047736  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 01:56:16.047783  167496 kubeadm.go:322]     --control-plane 	  
	I1004 01:56:16.047790  167496 kubeadm.go:322] 
	I1004 01:56:16.047912  167496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 01:56:16.047926  167496 kubeadm.go:322] 
	I1004 01:56:16.048006  167496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ocks1c.c9c0w76e1jxk27wy \
	I1004 01:56:16.048141  167496 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 01:56:16.048764  167496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 01:56:16.048792  167496 cni.go:84] Creating CNI manager for ""
	I1004 01:56:16.048803  167496 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:56:16.051468  167496 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:56:14.858093  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:16.052923  167496 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:56:16.062452  167496 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
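
The 457-byte /etc/cni/net.d/1-k8s.conflist copied above is minikube's generated bridge CNI configuration; its exact contents are not shown in the log, but a minimal conflist of the same general shape (bridge plugin with host-local IPAM; the name, bridge device, and subnet below are illustrative placeholders, not minikube's actual values) would be written like this:

	cat > /tmp/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF
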
	I1004 01:56:16.083093  167496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 01:56:16.083231  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.083232  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=old-k8s-version-107182 minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.097641  167496 ops.go:34] apiserver oom_adj: -16
	I1004 01:56:16.345591  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:16.432507  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:17.021142  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.938186  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:17.521246  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.020458  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:18.521120  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.020993  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:19.521313  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.020752  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:20.520524  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.020817  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:21.521038  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:22.020893  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.014159  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:22.520834  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.021375  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:23.521450  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.021541  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:24.521194  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.021420  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:25.521388  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.020861  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:26.520474  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:27.020520  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.094110  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:27.520733  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.020857  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:28.520471  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.020869  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:29.520801  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.020670  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:30.521376  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.021462  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:31.521133  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.021118  167496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 01:56:32.139808  167496 kubeadm.go:1081] duration metric: took 16.056644408s to wait for elevateKubeSystemPrivileges.
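
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the command is polled until the default ServiceAccount exists, which is why it takes roughly 16 seconds here. The same probe by hand, copied from the command the test itself runs:

	sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	# exits non-zero until the "default" ServiceAccount has been created
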
	I1004 01:56:32.139853  167496 kubeadm.go:406] StartCluster complete in 5m52.878327636s
	I1004 01:56:32.139879  167496 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.139983  167496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:56:32.143255  167496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:56:32.143507  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 01:56:32.143608  167496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 01:56:32.143692  167496 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143710  167496 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-107182"
	I1004 01:56:32.143708  167496 config.go:182] Loaded profile config "old-k8s-version-107182": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1004 01:56:32.143717  167496 addons.go:240] addon storage-provisioner should already be in state true
	I1004 01:56:32.143732  167496 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143751  167496 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-107182"
	W1004 01:56:32.143762  167496 addons.go:240] addon metrics-server should already be in state true
	I1004 01:56:32.143777  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143807  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.143717  167496 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-107182"
	I1004 01:56:32.143830  167496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-107182"
	I1004 01:56:32.144169  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144206  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144216  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144236  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.144237  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.144317  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.161736  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43833
	I1004 01:56:32.161739  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44057
	I1004 01:56:32.162384  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162494  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.162735  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40287
	I1004 01:56:32.163007  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163024  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163156  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163168  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163232  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.163731  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.163747  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.163809  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.163851  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164091  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.164163  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.164565  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.164611  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.165506  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.165553  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.168699  167496 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-107182"
	W1004 01:56:32.168721  167496 addons.go:240] addon default-storageclass should already be in state true
	I1004 01:56:32.168751  167496 host.go:66] Checking if "old-k8s-version-107182" exists ...
	I1004 01:56:32.169121  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.169148  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.187125  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41665
	I1004 01:56:32.187814  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188164  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34421
	I1004 01:56:32.188441  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.188462  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.188705  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.188823  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39633
	I1004 01:56:32.188990  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.189161  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.189340  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189357  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189428  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.189669  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.189688  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.189750  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190009  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.190037  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.190736  167496 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:56:32.190776  167496 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:56:32.191392  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.193250  167496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 01:56:32.192019  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.194795  167496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.194811  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 01:56:32.194833  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196365  167496 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 01:56:32.197757  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 01:56:32.197778  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 01:56:32.197798  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.196532  167496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-107182" context rescaled to 1 replicas
	I1004 01:56:32.197859  167496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.182 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 01:56:32.199796  167496 out.go:177] * Verifying Kubernetes components...
	I1004 01:56:32.201368  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:56:32.202167  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202462  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.202766  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.202794  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203229  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203304  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.203321  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.203485  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.203677  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.203744  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.204034  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204104  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.204194  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.204755  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.211128  167496 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I1004 01:56:32.211596  167496 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:56:32.212134  167496 main.go:141] libmachine: Using API Version  1
	I1004 01:56:32.212157  167496 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:56:32.212528  167496 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:56:32.212740  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetState
	I1004 01:56:32.214335  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .DriverName
	I1004 01:56:32.214592  167496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.214608  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 01:56:32.214627  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHHostname
	I1004 01:56:32.217280  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.217751  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:e7:48", ip: ""} in network mk-old-k8s-version-107182: {Iface:virbr1 ExpiryTime:2023-10-04 02:40:17 +0000 UTC Type:0 Mac:52:54:00:b4:e7:48 Iaid: IPaddr:192.168.72.182 Prefix:24 Hostname:old-k8s-version-107182 Clientid:01:52:54:00:b4:e7:48}
	I1004 01:56:32.217781  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | domain old-k8s-version-107182 has defined IP address 192.168.72.182 and MAC address 52:54:00:b4:e7:48 in network mk-old-k8s-version-107182
	I1004 01:56:32.218036  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHPort
	I1004 01:56:32.218202  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHKeyPath
	I1004 01:56:32.218378  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .GetSSHUsername
	I1004 01:56:32.218528  167496 sshutil.go:53] new ssh client: &{IP:192.168.72.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/old-k8s-version-107182/id_rsa Username:docker}
	I1004 01:56:32.390605  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 01:56:32.392051  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 01:56:32.434602  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 01:56:32.434629  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 01:56:32.469744  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 01:56:32.469793  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 01:56:32.488555  167496 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.489370  167496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 01:56:32.500794  167496 node_ready.go:49] node "old-k8s-version-107182" has status "Ready":"True"
	I1004 01:56:32.500818  167496 node_ready.go:38] duration metric: took 12.232731ms waiting for node "old-k8s-version-107182" to be "Ready" ...
	I1004 01:56:32.500828  167496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:32.514535  167496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:32.515832  167496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:32.515859  167496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 01:56:32.582811  167496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 01:56:33.449546  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.05890047s)
	I1004 01:56:33.449619  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.449635  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450076  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450100  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450113  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.450115  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.450139  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.450431  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.450454  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.450503  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.468938  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.468964  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.469311  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.469332  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.700534  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.308435267s)
	I1004 01:56:33.700563  167496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.211163368s)
	I1004 01:56:33.700582  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.700596  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.700593  167496 start.go:923] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1004 01:56:33.700975  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.700998  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.701010  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.701012  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701021  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.701273  167496 main.go:141] libmachine: (old-k8s-version-107182) DBG | Closing plugin on server side
	I1004 01:56:33.701321  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.701330  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823328  167496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.240468144s)
	I1004 01:56:33.823384  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823398  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.823769  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.823805  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.823819  167496 main.go:141] libmachine: Making call to close driver server
	I1004 01:56:33.823832  167496 main.go:141] libmachine: (old-k8s-version-107182) Calling .Close
	I1004 01:56:33.824142  167496 main.go:141] libmachine: Successfully made call to close driver server
	I1004 01:56:33.824164  167496 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 01:56:33.824176  167496 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-107182"
	I1004 01:56:33.825973  167496 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1004 01:56:33.162156  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:33.827977  167496 addons.go:502] enable addons completed in 1.684381662s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1004 01:56:34.532496  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:37.031254  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:39.242136  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:39.031853  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:41.531371  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:42.314165  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:44.032920  167496 pod_ready.go:102] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"False"
	I1004 01:56:44.533712  167496 pod_ready.go:92] pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.533740  167496 pod_ready.go:81] duration metric: took 12.019178851s waiting for pod "coredns-5644d7b6d9-nbf4s" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.533753  167496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539300  167496 pod_ready.go:92] pod "kube-proxy-8lcf5" in "kube-system" namespace has status "Ready":"True"
	I1004 01:56:44.539327  167496 pod_ready.go:81] duration metric: took 5.564927ms waiting for pod "kube-proxy-8lcf5" in "kube-system" namespace to be "Ready" ...
	I1004 01:56:44.539337  167496 pod_ready.go:38] duration metric: took 12.038496722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:56:44.539360  167496 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:56:44.539419  167496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:56:44.554851  167496 api_server.go:72] duration metric: took 12.356945821s to wait for apiserver process to appear ...
	I1004 01:56:44.554881  167496 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:56:44.554900  167496 api_server.go:253] Checking apiserver healthz at https://192.168.72.182:8443/healthz ...
	I1004 01:56:44.562352  167496 api_server.go:279] https://192.168.72.182:8443/healthz returned 200:
	ok
	I1004 01:56:44.563304  167496 api_server.go:141] control plane version: v1.16.0
	I1004 01:56:44.563333  167496 api_server.go:131] duration metric: took 8.444498ms to wait for apiserver health ...
	I1004 01:56:44.563344  167496 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:56:44.567672  167496 system_pods.go:59] 4 kube-system pods found
	I1004 01:56:44.567701  167496 system_pods.go:61] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.567708  167496 system_pods.go:61] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.567719  167496 system_pods.go:61] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.567728  167496 system_pods.go:61] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.567736  167496 system_pods.go:74] duration metric: took 4.384195ms to wait for pod list to return data ...
	I1004 01:56:44.567746  167496 default_sa.go:34] waiting for default service account to be created ...
	I1004 01:56:44.570566  167496 default_sa.go:45] found service account: "default"
	I1004 01:56:44.570597  167496 default_sa.go:55] duration metric: took 2.843182ms for default service account to be created ...
	I1004 01:56:44.570608  167496 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 01:56:44.575497  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.575524  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.575534  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.575543  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.575552  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.575572  167496 retry.go:31] will retry after 201.187376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:44.781105  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:44.781140  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:44.781146  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:44.781155  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:44.781162  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:44.781179  167496 retry.go:31] will retry after 304.433498ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.090030  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.090055  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.090061  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.090067  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.090073  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.090088  167496 retry.go:31] will retry after 344.077296ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.439684  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.439712  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.439717  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.439723  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.439729  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.439743  167496 retry.go:31] will retry after 379.883887ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:45.824813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:45.824839  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:45.824844  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:45.824853  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:45.824859  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:45.824873  167496 retry.go:31] will retry after 650.141708ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:46.480447  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:46.480473  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:46.480478  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:46.480486  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:46.480492  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:46.480507  167496 retry.go:31] will retry after 870.616376ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:47.356424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:47.356452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:47.356457  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:47.356464  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:47.356470  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:47.356486  167496 retry.go:31] will retry after 972.499927ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:48.394163  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:51.466067  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:48.333234  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:48.333263  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:48.333269  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:48.333276  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:48.333282  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:48.333296  167496 retry.go:31] will retry after 1.071674914s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:49.410813  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:49.410843  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:49.410853  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:49.410864  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:49.410873  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:49.410892  167496 retry.go:31] will retry after 1.833649065s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:51.251023  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:51.251046  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:51.251052  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:51.251058  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:51.251065  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:51.251080  167496 retry.go:31] will retry after 1.914402614s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:53.170633  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:53.170675  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:53.170684  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:53.170697  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:53.170706  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:53.170727  167496 retry.go:31] will retry after 2.900802753s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:56.077479  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:56.077505  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:56.077510  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:56.077517  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:56.077523  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:56.077539  167496 retry.go:31] will retry after 2.931373296s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:56:57.546142  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:00.618191  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:56:59.014602  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:56:59.014631  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:56:59.014639  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:56:59.014650  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:56:59.014658  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:56:59.014679  167496 retry.go:31] will retry after 3.641834809s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:06.698118  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:02.662919  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:02.662957  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:02.662962  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:02.662978  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:02.662986  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:02.663000  167496 retry.go:31] will retry after 5.249216721s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:09.770058  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:07.918510  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:07.918540  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:07.918545  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:07.918551  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:07.918558  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:07.918575  167496 retry.go:31] will retry after 5.21551618s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:15.850131  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:13.139424  167496 system_pods.go:86] 4 kube-system pods found
	I1004 01:57:13.139452  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:13.139461  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:13.139470  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:13.139480  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:13.139499  167496 retry.go:31] will retry after 6.379920631s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:18.922143  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:19.525272  167496 system_pods.go:86] 5 kube-system pods found
	I1004 01:57:19.525311  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:19.525322  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Pending
	I1004 01:57:19.525329  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:19.525340  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:19.525350  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:19.525372  167496 retry.go:31] will retry after 7.200178423s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1004 01:57:25.002152  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:26.734572  167496 system_pods.go:86] 6 kube-system pods found
	I1004 01:57:26.734603  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:26.734610  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:26.734615  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:26.734619  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:26.734626  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:26.734640  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:26.734662  167496 retry.go:31] will retry after 10.892871067s: missing components: etcd, kube-apiserver
	I1004 01:57:28.078109  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:34.158104  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:37.634963  167496 system_pods.go:86] 8 kube-system pods found
	I1004 01:57:37.634993  167496 system_pods.go:89] "coredns-5644d7b6d9-nbf4s" [8965b384-aa80-4e12-8323-4129cc7b53c3] Running
	I1004 01:57:37.634998  167496 system_pods.go:89] "etcd-old-k8s-version-107182" [18310540-21e4-4225-9ce0-e662fae16ca5] Running
	I1004 01:57:37.635003  167496 system_pods.go:89] "kube-apiserver-old-k8s-version-107182" [7418c38e-cae2-4d96-bb43-6827c37fc3dd] Running
	I1004 01:57:37.635008  167496 system_pods.go:89] "kube-controller-manager-old-k8s-version-107182" [d955fa80-9bb5-4326-8f56-97895c387f3d] Running
	I1004 01:57:37.635012  167496 system_pods.go:89] "kube-proxy-8lcf5" [50235cdc-deb8-47a6-974a-943636afd805] Running
	I1004 01:57:37.635015  167496 system_pods.go:89] "kube-scheduler-old-k8s-version-107182" [4fbb6d53-8041-46de-b5a4-52fdb4c08085] Running
	I1004 01:57:37.635023  167496 system_pods.go:89] "metrics-server-74d5856cc6-cl45r" [93297548-dde0-4cd3-b47f-a2a867cca7c4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:57:37.635028  167496 system_pods.go:89] "storage-provisioner" [71715868-9727-4d70-b5b4-5f0199e0579a] Running
	I1004 01:57:37.635035  167496 system_pods.go:126] duration metric: took 53.064420406s to wait for k8s-apps to be running ...
	I1004 01:57:37.635042  167496 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 01:57:37.635088  167496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:57:37.654311  167496 system_svc.go:56] duration metric: took 19.259695ms WaitForService to wait for kubelet.
	I1004 01:57:37.654335  167496 kubeadm.go:581] duration metric: took 1m5.456439597s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 01:57:37.654358  167496 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:57:37.658645  167496 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:57:37.658691  167496 node_conditions.go:123] node cpu capacity is 2
	I1004 01:57:37.658730  167496 node_conditions.go:105] duration metric: took 4.365872ms to run NodePressure ...
	I1004 01:57:37.658744  167496 start.go:228] waiting for startup goroutines ...
	I1004 01:57:37.658753  167496 start.go:233] waiting for cluster config update ...
	I1004 01:57:37.658763  167496 start.go:242] writing updated cluster config ...
	I1004 01:57:37.659093  167496 ssh_runner.go:195] Run: rm -f paused
	I1004 01:57:37.707603  167496 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1004 01:57:37.709678  167496 out.go:177] 
	W1004 01:57:37.711433  167496 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1004 01:57:37.713148  167496 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1004 01:57:37.714765  167496 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-107182" cluster and "default" namespace by default
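[editor's note] The 167496 run above spends roughly 53s polling the kube-system namespace (retry.go lines) until etcd, kube-apiserver, kube-controller-manager and kube-scheduler all report Running, retrying with growing delays. The following is a minimal illustrative sketch of that poll-with-backoff pattern using client-go; the kubeconfig path, the prefix-based component matching and the backoff numbers are assumptions for illustration only, not minikube's actual implementation.

// poll_pods.go - illustrative sketch of waiting for kube-system control-plane
// components, loosely mirroring the retry behaviour visible in the log above.
// Assumptions: kubeconfig at the default ~/.kube/config location, components
// identified by pod-name prefix, hand-rolled doubling backoff capped at 5s.
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func missingComponents(want []string, pods []corev1.Pod) []string {
	var missing []string
	for _, name := range want {
		found := false
		for _, p := range pods {
			if strings.HasPrefix(p.Name, name) && p.Status.Phase == corev1.PodRunning {
				found = true
				break
			}
		}
		if !found {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	want := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler"}
	delay := 200 * time.Millisecond
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			missing := missingComponents(want, pods.Items)
			if len(missing) == 0 {
				fmt.Println("all control-plane components running")
				return
			}
			fmt.Printf("will retry after %v: missing components: %s\n", delay, strings.Join(missing, ", "))
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // grow the delay, roughly like the retries in the log
		}
	}
}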
	I1004 01:57:37.226085  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:43.306106  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:46.378086  169515 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.105:22: connect: no route to host
	I1004 01:57:49.379613  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:57:49.379686  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:57:49.381326  169515 machine.go:91] provisioned docker machine in 4m37.42034364s
	I1004 01:57:49.381400  169515 fix.go:56] fixHost completed within 4m37.441947276s
	I1004 01:57:49.381413  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 4m37.441976851s
	W1004 01:57:49.381431  169515 start.go:688] error starting host: provision: host is not running
	W1004 01:57:49.381511  169515 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1004 01:57:49.381527  169515 start.go:703] Will try again in 5 seconds ...
	I1004 01:57:54.381970  169515 start.go:365] acquiring machines lock for default-k8s-diff-port-239802: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 01:57:54.382105  169515 start.go:369] acquired machines lock for "default-k8s-diff-port-239802" in 82.376µs
	I1004 01:57:54.382139  169515 start.go:96] Skipping create...Using existing machine configuration
	I1004 01:57:54.382148  169515 fix.go:54] fixHost starting: 
	I1004 01:57:54.382415  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 01:57:54.382441  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:57:54.397922  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34433
	I1004 01:57:54.398391  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:57:54.398857  169515 main.go:141] libmachine: Using API Version  1
	I1004 01:57:54.398879  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:57:54.399227  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:57:54.399426  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:57:54.399606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 01:57:54.401353  169515 fix.go:102] recreateIfNeeded on default-k8s-diff-port-239802: state=Stopped err=<nil>
	I1004 01:57:54.401379  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	W1004 01:57:54.401556  169515 fix.go:128] unexpected machine state, will restart: <nil>
	I1004 01:57:54.403451  169515 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-239802" ...
	I1004 01:57:54.404883  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Start
	I1004 01:57:54.405065  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring networks are active...
	I1004 01:57:54.405797  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network default is active
	I1004 01:57:54.406184  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Ensuring network mk-default-k8s-diff-port-239802 is active
	I1004 01:57:54.406630  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Getting domain xml...
	I1004 01:57:54.407374  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Creating domain...
	I1004 01:57:55.768364  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting to get IP...
	I1004 01:57:55.769252  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769744  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.769819  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.769720  170429 retry.go:31] will retry after 205.391459ms: waiting for machine to come up
	I1004 01:57:55.977260  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977696  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:55.977721  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:55.977651  170429 retry.go:31] will retry after 308.679034ms: waiting for machine to come up
	I1004 01:57:56.288223  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288707  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.288740  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.288656  170429 retry.go:31] will retry after 419.166959ms: waiting for machine to come up
	I1004 01:57:56.708911  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709549  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:56.709581  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:56.709483  170429 retry.go:31] will retry after 402.015435ms: waiting for machine to come up
	I1004 01:57:57.113100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113682  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.113735  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.113608  170429 retry.go:31] will retry after 555.795777ms: waiting for machine to come up
	I1004 01:57:57.671427  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672087  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:57.672124  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:57.671985  170429 retry.go:31] will retry after 891.745334ms: waiting for machine to come up
	I1004 01:57:58.564986  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565501  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:58.565533  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:58.565436  170429 retry.go:31] will retry after 897.272137ms: waiting for machine to come up
	I1004 01:57:59.465110  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465742  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:57:59.465773  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:57:59.465695  170429 retry.go:31] will retry after 1.042370898s: waiting for machine to come up
	I1004 01:58:00.509812  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510320  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:00.510347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:00.510296  170429 retry.go:31] will retry after 1.512718285s: waiting for machine to come up
	I1004 01:58:02.024160  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024566  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:02.024599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:02.024502  170429 retry.go:31] will retry after 1.493800744s: waiting for machine to come up
	I1004 01:58:03.520361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520958  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:03.520991  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:03.520911  170429 retry.go:31] will retry after 2.206730553s: waiting for machine to come up
	I1004 01:58:05.729534  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730016  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:05.730050  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:05.729969  170429 retry.go:31] will retry after 3.088350315s: waiting for machine to come up
	I1004 01:58:08.820266  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820743  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:08.820774  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:08.820689  170429 retry.go:31] will retry after 2.773482095s: waiting for machine to come up
	I1004 01:58:11.595977  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596515  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | unable to find current IP address of domain default-k8s-diff-port-239802 in network mk-default-k8s-diff-port-239802
	I1004 01:58:11.596540  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | I1004 01:58:11.596475  170429 retry.go:31] will retry after 3.486376696s: waiting for machine to come up
	I1004 01:58:15.084904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.085418  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Found IP for machine: 192.168.61.105
	I1004 01:58:15.085447  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserving static IP address...
	I1004 01:58:15.085460  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has current primary IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.086007  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.086039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Reserved static IP address: 192.168.61.105
	I1004 01:58:15.086059  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | skip adding static IP to network mk-default-k8s-diff-port-239802 - found existing host DHCP lease matching {name: "default-k8s-diff-port-239802", mac: "52:54:00:4b:98:4e", ip: "192.168.61.105"}
	I1004 01:58:15.086080  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Getting to WaitForSSH function...
	I1004 01:58:15.086098  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Waiting for SSH to be available...
	I1004 01:58:15.088134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088506  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.088538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.088726  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH client type: external
	I1004 01:58:15.088751  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa (-rw-------)
	I1004 01:58:15.088802  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.105 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 01:58:15.088817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | About to run SSH command:
	I1004 01:58:15.088829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | exit 0
	I1004 01:58:15.226051  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | SSH cmd err, output: <nil>: 
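[editor's note] The 169515 run above repeatedly logs "Error dialing TCP ... no route to host" while the default-k8s-diff-port VM is down, then, once the domain gets its DHCP lease, the driver probes SSH with an external ssh running "exit 0" until it succeeds. Below is a minimal sketch of that wait-for-SSH loop; the address, key path and retry timing are example values taken from or modelled on the log, not the kvm2 driver's actual code.

// wait_ssh.go - illustrative sketch of the "Waiting for SSH to be available"
// step seen above: keep dialing tcp/22 until the guest answers, then confirm
// with a no-op command over ssh, like the driver's WaitForSSH probe.
// Assumptions: address from the DHCP lease in the log, placeholder key path.
package main

import (
	"fmt"
	"net"
	"os/exec"
	"time"
)

func main() {
	addr := "192.168.61.105:22" // IP from the host DHCP lease logged above
	for {
		conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
		if err != nil {
			fmt.Printf("dial %s: %v, retrying\n", addr, err) // e.g. "no route to host" while the VM boots
			time.Sleep(3 * time.Second)
			continue
		}
		conn.Close()
		break
	}
	// Conceptually the same probe the driver runs: a trivial command over ssh.
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", "/path/to/id_rsa", // placeholder key path
		"docker@192.168.61.105", "exit 0")
	if err := cmd.Run(); err != nil {
		fmt.Println("ssh probe failed:", err)
		return
	}
	fmt.Println("SSH is available")
}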
	I1004 01:58:15.226408  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetConfigRaw
	I1004 01:58:15.227055  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.229669  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230073  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.230108  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.230390  169515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/config.json ...
	I1004 01:58:15.230651  169515 machine.go:88] provisioning docker machine ...
	I1004 01:58:15.230676  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:15.230912  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231113  169515 buildroot.go:166] provisioning hostname "default-k8s-diff-port-239802"
	I1004 01:58:15.231134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.231297  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.233606  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.233990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.234026  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.234134  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.234317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234484  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.234663  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.234867  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.235199  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.235213  169515 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-239802 && echo "default-k8s-diff-port-239802" | sudo tee /etc/hostname
	I1004 01:58:15.374541  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-239802
	
	I1004 01:58:15.374573  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.377761  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378278  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.378321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.378494  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.378705  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.378967  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.379135  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.379569  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.379594  169515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-239802' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-239802/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-239802' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 01:58:15.520076  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 01:58:15.520107  169515 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 01:58:15.520129  169515 buildroot.go:174] setting up certificates
	I1004 01:58:15.520141  169515 provision.go:83] configureAuth start
	I1004 01:58:15.520155  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetMachineName
	I1004 01:58:15.520502  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:15.523317  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.523814  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.523854  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.524058  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.526453  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526752  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.526794  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.526920  169515 provision.go:138] copyHostCerts
	I1004 01:58:15.526985  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 01:58:15.527069  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 01:58:15.527197  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 01:58:15.527323  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 01:58:15.527337  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 01:58:15.527373  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 01:58:15.527450  169515 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 01:58:15.527460  169515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 01:58:15.527490  169515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 01:58:15.527550  169515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-239802 san=[192.168.61.105 192.168.61.105 localhost 127.0.0.1 minikube default-k8s-diff-port-239802]
	I1004 01:58:15.632152  169515 provision.go:172] copyRemoteCerts
	I1004 01:58:15.632211  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 01:58:15.632236  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.635344  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.635733  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.635886  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.636100  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.636262  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.636411  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:15.731442  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1004 01:58:15.755690  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 01:58:15.781135  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 01:58:15.805779  169515 provision.go:86] duration metric: configureAuth took 285.621049ms
	I1004 01:58:15.805813  169515 buildroot.go:189] setting minikube options for container-runtime
	I1004 01:58:15.806097  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:58:15.806193  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:15.809186  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809599  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:15.809648  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:15.809847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:15.810105  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810354  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:15.810577  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:15.810822  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:15.811265  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:15.811283  169515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 01:58:16.145471  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 01:58:16.145515  169515 machine.go:91] provisioned docker machine in 914.847777ms
	I1004 01:58:16.145528  169515 start.go:300] post-start starting for "default-k8s-diff-port-239802" (driver="kvm2")
	I1004 01:58:16.145541  169515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 01:58:16.145564  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.145936  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 01:58:16.145970  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.148759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149272  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.149306  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.149563  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.149803  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.150023  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.150185  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.245579  169515 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 01:58:16.250364  169515 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 01:58:16.250394  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 01:58:16.250472  169515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 01:58:16.250566  169515 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 01:58:16.250821  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 01:58:16.260991  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:16.283999  169515 start.go:303] post-start completed in 138.45373ms
	I1004 01:58:16.284022  169515 fix.go:56] fixHost completed within 21.901874601s
	I1004 01:58:16.284043  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.286817  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287150  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.287174  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.287383  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.287598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287759  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.287848  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.288010  169515 main.go:141] libmachine: Using SSH client type: native
	I1004 01:58:16.288381  169515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.61.105 22 <nil> <nil>}
	I1004 01:58:16.288414  169515 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 01:58:16.418775  169515 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696384696.400645117
	
	I1004 01:58:16.418799  169515 fix.go:206] guest clock: 1696384696.400645117
	I1004 01:58:16.418806  169515 fix.go:219] Guest: 2023-10-04 01:58:16.400645117 +0000 UTC Remote: 2023-10-04 01:58:16.284026062 +0000 UTC m=+304.486597710 (delta=116.619055ms)
	I1004 01:58:16.418832  169515 fix.go:190] guest clock delta is within tolerance: 116.619055ms
	I1004 01:58:16.418837  169515 start.go:83] releasing machines lock for "default-k8s-diff-port-239802", held for 22.036713239s
	I1004 01:58:16.418861  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.419152  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:16.421829  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422225  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.422265  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.422402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.422990  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423191  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 01:58:16.423288  169515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 01:58:16.423361  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.423400  169515 ssh_runner.go:195] Run: cat /version.json
	I1004 01:58:16.423430  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 01:58:16.426244  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426412  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426666  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426694  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.426835  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.426903  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:16.426928  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:16.427049  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427079  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 01:58:16.427257  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427305  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 01:58:16.427389  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.427491  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 01:58:16.427616  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 01:58:16.541652  169515 ssh_runner.go:195] Run: systemctl --version
	I1004 01:58:16.548207  169515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 01:58:16.689236  169515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 01:58:16.695609  169515 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 01:58:16.695700  169515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 01:58:16.711541  169515 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 01:58:16.711569  169515 start.go:469] detecting cgroup driver to use...
	I1004 01:58:16.711648  169515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 01:58:16.727693  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 01:58:16.741081  169515 docker.go:197] disabling cri-docker service (if available) ...
	I1004 01:58:16.741145  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 01:58:16.754740  169515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 01:58:16.768697  169515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 01:58:16.892808  169515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 01:58:17.012129  169515 docker.go:213] disabling docker service ...
	I1004 01:58:17.012203  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 01:58:17.027872  169515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 01:58:17.039804  169515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 01:58:17.138577  169515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 01:58:17.242819  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 01:58:17.255768  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 01:58:17.273761  169515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 01:58:17.273824  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.284028  169515 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 01:58:17.284103  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.294763  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.304668  169515 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 01:58:17.314305  169515 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 01:58:17.324280  169515 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 01:58:17.333123  169515 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 01:58:17.333181  169515 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 01:58:17.346921  169515 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 01:58:17.357411  169515 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 01:58:17.466076  169515 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 01:58:17.665370  169515 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 01:58:17.665446  169515 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 01:58:17.671020  169515 start.go:537] Will wait 60s for crictl version
	I1004 01:58:17.671103  169515 ssh_runner.go:195] Run: which crictl
	I1004 01:58:17.675046  169515 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 01:58:17.711171  169515 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 01:58:17.711255  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.764684  169515 ssh_runner.go:195] Run: crio --version
	I1004 01:58:17.818887  169515 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 01:58:17.820580  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetIP
	I1004 01:58:17.823598  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824003  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 01:58:17.824039  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 01:58:17.824180  169515 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1004 01:58:17.828529  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:17.842201  169515 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 01:58:17.842277  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:17.889167  169515 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 01:58:17.889260  169515 ssh_runner.go:195] Run: which lz4
	I1004 01:58:17.893479  169515 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 01:58:17.898162  169515 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 01:58:17.898208  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 01:58:19.729377  169515 crio.go:444] Took 1.835934 seconds to copy over tarball
	I1004 01:58:19.729456  169515 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 01:58:22.593494  169515 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.864005818s)
	I1004 01:58:22.593526  169515 crio.go:451] Took 2.864115 seconds to extract the tarball
	I1004 01:58:22.593541  169515 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 01:58:22.637806  169515 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 01:58:22.688382  169515 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 01:58:22.688411  169515 cache_images.go:84] Images are preloaded, skipping loading
	I1004 01:58:22.688492  169515 ssh_runner.go:195] Run: crio config
	I1004 01:58:22.763035  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:22.763056  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:22.763523  169515 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 01:58:22.763558  169515 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.105 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-239802 NodeName:default-k8s-diff-port-239802 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.105"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.105 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 01:58:22.763710  169515 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.105
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-239802"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.105
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.105"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 01:58:22.763781  169515 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-239802 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.105
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1004 01:58:22.763836  169515 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 01:58:22.772839  169515 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 01:58:22.772912  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 01:58:22.781165  169515 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I1004 01:58:22.799884  169515 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 01:58:22.817806  169515 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I1004 01:58:22.836379  169515 ssh_runner.go:195] Run: grep 192.168.61.105	control-plane.minikube.internal$ /etc/hosts
	I1004 01:58:22.840577  169515 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.105	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 01:58:22.854009  169515 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802 for IP: 192.168.61.105
	I1004 01:58:22.854051  169515 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 01:58:22.854225  169515 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 01:58:22.854280  169515 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 01:58:22.854390  169515 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/client.key
	I1004 01:58:22.854470  169515 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key.c44c9625
	I1004 01:58:22.854525  169515 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key
	I1004 01:58:22.854676  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 01:58:22.854716  169515 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 01:58:22.854731  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 01:58:22.854795  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 01:58:22.854841  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 01:58:22.854874  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 01:58:22.854936  169515 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 01:58:22.855704  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 01:58:22.883055  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1004 01:58:22.909260  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 01:58:22.936140  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/default-k8s-diff-port-239802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1004 01:58:22.963068  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 01:58:22.990358  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 01:58:23.019293  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 01:58:23.046021  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 01:58:23.072727  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 01:58:23.099530  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 01:58:23.125965  169515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 01:58:23.152909  169515 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 01:58:23.171043  169515 ssh_runner.go:195] Run: openssl version
	I1004 01:58:23.177062  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 01:58:23.187693  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192607  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.192695  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 01:58:23.198687  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 01:58:23.208870  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 01:58:23.220345  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225134  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.225205  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 01:58:23.230830  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 01:58:23.241519  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 01:58:23.251661  169515 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256671  169515 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.256740  169515 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 01:58:23.263041  169515 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 01:58:23.272914  169515 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 01:58:23.277650  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1004 01:58:23.283889  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1004 01:58:23.289960  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1004 01:58:23.295853  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1004 01:58:23.302386  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1004 01:58:23.308626  169515 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1004 01:58:23.315173  169515 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-239802 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-239802 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 01:58:23.315270  169515 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 01:58:23.315329  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:23.360078  169515 cri.go:89] found id: ""
	I1004 01:58:23.360160  169515 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 01:58:23.370577  169515 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1004 01:58:23.370607  169515 kubeadm.go:636] restartCluster start
	I1004 01:58:23.370670  169515 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1004 01:58:23.380554  169515 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.382064  169515 kubeconfig.go:92] found "default-k8s-diff-port-239802" server: "https://192.168.61.105:8444"
	I1004 01:58:23.384489  169515 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1004 01:58:23.394552  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.394621  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.406027  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.406050  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.406088  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.416731  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:23.917459  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:23.917567  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:23.929055  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.417118  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.417196  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.429944  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:24.917530  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:24.917640  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:24.928908  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.417526  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.417598  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.429815  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:25.917482  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:25.917579  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:25.928966  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.417583  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.417703  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.429371  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:26.917165  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:26.917259  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:26.929210  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.417701  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.417803  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.429305  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:27.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:27.917024  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:27.928702  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.417024  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.417142  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.428772  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:28.917340  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:28.917439  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:28.929099  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.417234  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.417333  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.429431  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:29.916874  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:29.916967  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:29.928613  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.417157  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.417247  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.429364  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:30.916913  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:30.917013  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:30.928682  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.417225  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.417328  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.429087  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:31.917131  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:31.917218  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:31.929475  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.416979  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.417061  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.431474  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:32.917018  169515 api_server.go:166] Checking apiserver status ...
	I1004 01:58:32.917123  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1004 01:58:32.929083  169515 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1004 01:58:33.394900  169515 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1004 01:58:33.394937  169515 kubeadm.go:1128] stopping kube-system containers ...
	I1004 01:58:33.394955  169515 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1004 01:58:33.395025  169515 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 01:58:33.439584  169515 cri.go:89] found id: ""
	I1004 01:58:33.439676  169515 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1004 01:58:33.455188  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 01:58:33.464838  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 01:58:33.464909  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473594  169515 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1004 01:58:33.473622  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:33.606598  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.496399  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.698397  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.778632  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:34.858383  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 01:58:34.858475  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:34.871386  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.384197  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:35.884575  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.383599  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:36.883552  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.384513  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:58:37.409737  169515 api_server.go:72] duration metric: took 2.551352833s to wait for apiserver process to appear ...
	I1004 01:58:37.409768  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 01:58:37.409791  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410400  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.410464  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:37.410871  169515 api_server.go:269] stopped: https://192.168.61.105:8444/healthz: Get "https://192.168.61.105:8444/healthz": dial tcp 192.168.61.105:8444: connect: connection refused
	I1004 01:58:37.911616  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.733688  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.733788  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.733802  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.789718  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1004 01:58:41.789758  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1004 01:58:41.911398  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:41.919484  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:41.919510  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.411543  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.417441  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.417474  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:42.910983  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:42.918972  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1004 01:58:42.918999  169515 api_server.go:103] status: https://192.168.61.105:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1004 01:58:43.411752  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 01:58:43.418030  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 01:58:43.429647  169515 api_server.go:141] control plane version: v1.28.2
	I1004 01:58:43.429678  169515 api_server.go:131] duration metric: took 6.019900977s to wait for apiserver health ...
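The sequence above is the apiserver health wait: the https://192.168.61.105:8444/healthz endpoint is polled until it returns 200. The early 403 responses are expected because the anonymous user is not yet authorized while the rbac/bootstrap-roles poststarthook is still running, and each 500 response lists exactly which poststarthooks have not yet finished. A minimal sketch of such a polling loop, assuming only the URL from the log (the interval, timeout, and TLS handling here are illustrative, not minikube's actual api_server.go logic):

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout: 5 * time.Second,
        // Certificate verification is skipped for brevity in this sketch;
        // a real client should verify against the cluster CA.
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            // 403 (anonymous not yet authorized) and 500 (poststarthooks
            // still failing) are expected while the control plane settles.
            if resp.StatusCode == http.StatusOK {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.61.105:8444/healthz", 2*time.Minute); err != nil {
        fmt.Println(err)
    }
}

Connection-refused errors before the first 403, as seen at 01:58:37 above, simply mean the kube-apiserver container has not started listening yet, so the loop keeps retrying.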
	I1004 01:58:43.429690  169515 cni.go:84] Creating CNI manager for ""
	I1004 01:58:43.429697  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 01:58:43.431972  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 01:58:43.433484  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 01:58:43.447694  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 01:58:43.471374  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 01:58:43.481660  169515 system_pods.go:59] 8 kube-system pods found
	I1004 01:58:43.481703  169515 system_pods.go:61] "coredns-5dd5756b68-ntmdn" [93a30dd9-0d38-4648-9291-703928437ead] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1004 01:58:43.481716  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [387a9b5c-12b7-4be8-ab2a-a05f15640f17] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1004 01:58:43.481725  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [a9900212-1372-410f-b6d9-105f78dfde92] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1004 01:58:43.481735  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [d9684911-65f2-4b81-800a-9d99b277b7e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1004 01:58:43.481747  169515 system_pods.go:61] "kube-proxy-v9qw4" [6db82ea2-130c-4f40-ae3e-2abe4fdb2860] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1004 01:58:43.481757  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [98b82b29-64c3-4042-bf6b-040b05992648] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1004 01:58:43.481770  169515 system_pods.go:61] "metrics-server-57f55c9bc5-hxrqk" [94e85ebf-dba5-4975-8167-bc23dc74b5f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 01:58:43.481789  169515 system_pods.go:61] "storage-provisioner" [11d1866b-ef0b-4b12-a2d3-a38fe68f5184] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1004 01:58:43.481801  169515 system_pods.go:74] duration metric: took 10.402243ms to wait for pod list to return data ...
	I1004 01:58:43.481815  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 01:58:43.485997  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 01:58:43.486041  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 01:58:43.486056  169515 node_conditions.go:105] duration metric: took 4.234155ms to run NodePressure ...
	I1004 01:58:43.486078  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1004 01:58:43.740784  169515 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749933  169515 kubeadm.go:787] kubelet initialised
	I1004 01:58:43.749956  169515 kubeadm.go:788] duration metric: took 9.146841ms waiting for restarted kubelet to initialise ...
	I1004 01:58:43.749964  169515 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 01:58:43.762449  169515 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:45.795545  169515 pod_ready.go:102] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:47.294570  169515 pod_ready.go:92] pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:47.294593  169515 pod_ready.go:81] duration metric: took 3.532106169s waiting for pod "coredns-5dd5756b68-ntmdn" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:47.294629  169515 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:49.318426  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.320090  169515 pod_ready.go:102] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:51.819783  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.819808  169515 pod_ready.go:81] duration metric: took 4.525169791s waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.819820  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825714  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:51.825738  169515 pod_ready.go:81] duration metric: took 5.910346ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:51.825750  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345345  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.345375  169515 pod_ready.go:81] duration metric: took 519.614193ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.345388  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351098  169515 pod_ready.go:92] pod "kube-proxy-v9qw4" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.351115  169515 pod_ready.go:81] duration metric: took 5.721421ms waiting for pod "kube-proxy-v9qw4" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.351123  169515 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675957  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 01:58:52.675986  169515 pod_ready.go:81] duration metric: took 324.855954ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:52.675999  169515 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	I1004 01:58:54.985434  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:56.986014  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:58:59.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:01.984178  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:03.986718  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:06.486121  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:08.986286  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:10.988493  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:13.487313  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:15.986463  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:17.987092  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:20.484986  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:22.985012  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:25.486297  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:27.988254  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:30.486124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:32.486163  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:34.986124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:36.986217  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:39.485494  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:41.485638  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:43.987966  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:46.484556  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:48.984057  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:50.984900  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:53.483808  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:55.484765  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:57.485763  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 01:59:59.985726  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:02.484831  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:04.985989  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:07.485664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:09.485893  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:11.985932  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:13.986799  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:16.488334  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:18.985949  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:21.485124  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:23.986108  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:26.486381  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:28.984912  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:31.484885  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:33.485511  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:35.485786  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:37.985061  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:40.486400  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:42.985255  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:45.485905  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:47.985646  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:49.988812  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:52.485077  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:54.485567  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:56.486128  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:00:58.486811  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:00.985292  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:02.985432  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:04.990218  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:07.485695  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:09.485758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:11.985237  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:13.988632  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:16.486921  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:18.986300  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:21.486008  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:23.990988  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:26.486730  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:28.984846  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:30.985403  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:32.985500  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:34.989615  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:37.485216  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:39.985745  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:42.485969  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:44.984000  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:46.984954  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:49.485168  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:51.986705  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:53.987005  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:56.484664  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:01:58.485697  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:00.486876  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:02.986832  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:05.485817  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:07.486977  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:09.984945  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:11.985637  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:13.985859  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:16.484825  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:18.485020  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:20.485388  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:22.486622  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:24.985561  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:27.484794  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:29.986684  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:32.494495  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:34.984951  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:36.985082  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:38.987881  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:41.485453  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:43.486758  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:45.983941  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:47.984452  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:50.486243  169515 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace has status "Ready":"False"
	I1004 02:02:52.676831  169515 pod_ready.go:81] duration metric: took 4m0.000812817s waiting for pod "metrics-server-57f55c9bc5-hxrqk" in "kube-system" namespace to be "Ready" ...
	E1004 02:02:52.676871  169515 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1004 02:02:52.676911  169515 pod_ready.go:38] duration metric: took 4m8.926937921s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:02:52.676950  169515 kubeadm.go:640] restartCluster took 4m29.306332407s
	W1004 02:02:52.677028  169515 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
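The long run of pod_ready messages above is the 4m0s readiness wait: roughly every 2.5 seconds the metrics-server pod is re-checked, and because its Ready condition never becomes True the wait times out and the cluster is reset below. A rough client-go sketch of that kind of readiness poll follows; the kubeconfig path and pod name are copied from the log, while the interval, timeout, and error handling are assumptions rather than minikube's actual pod_ready implementation.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
    for _, c := range pod.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17348-128338/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-hxrqk", metav1.GetOptions{})
        if err != nil {
            return false, nil // keep polling across transient API errors
        }
        return isPodReady(pod), nil
    })
    fmt.Println("ready:", err == nil)
}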
	I1004 02:02:52.677066  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1004 02:03:06.687598  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.010492171s)
	I1004 02:03:06.687683  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:06.702277  169515 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:03:06.711887  169515 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:03:06.721545  169515 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:03:06.721606  169515 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:03:06.964165  169515 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:03:17.591049  169515 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:03:17.591142  169515 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:03:17.591233  169515 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:03:17.591398  169515 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:03:17.591561  169515 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:03:17.591679  169515 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:03:17.593418  169515 out.go:204]   - Generating certificates and keys ...
	I1004 02:03:17.593514  169515 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:03:17.593593  169515 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:03:17.593716  169515 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1004 02:03:17.593817  169515 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1004 02:03:17.593913  169515 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1004 02:03:17.593964  169515 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1004 02:03:17.594015  169515 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1004 02:03:17.594064  169515 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1004 02:03:17.594137  169515 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1004 02:03:17.594216  169515 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1004 02:03:17.594254  169515 kubeadm.go:322] [certs] Using the existing "sa" key
	I1004 02:03:17.594318  169515 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:03:17.594374  169515 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:03:17.594446  169515 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:03:17.594525  169515 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:03:17.594596  169515 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:03:17.594701  169515 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:03:17.594785  169515 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:03:17.596492  169515 out.go:204]   - Booting up control plane ...
	I1004 02:03:17.596593  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:03:17.596678  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:03:17.596767  169515 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:03:17.596903  169515 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:03:17.597026  169515 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:03:17.597087  169515 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:03:17.597271  169515 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:03:17.597365  169515 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004292 seconds
	I1004 02:03:17.597507  169515 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:03:17.597663  169515 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:03:17.597752  169515 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:03:17.598019  169515 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-239802 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:03:17.598091  169515 kubeadm.go:322] [bootstrap-token] Using token: 23w16s.bx0je8b3n2xujqpx
	I1004 02:03:17.599777  169515 out.go:204]   - Configuring RBAC rules ...
	I1004 02:03:17.599892  169515 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:03:17.600022  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:03:17.600211  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:03:17.600376  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:03:17.600517  169515 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:03:17.600640  169515 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:03:17.600774  169515 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:03:17.600836  169515 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:03:17.600895  169515 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:03:17.600908  169515 kubeadm.go:322] 
	I1004 02:03:17.600957  169515 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:03:17.600963  169515 kubeadm.go:322] 
	I1004 02:03:17.601026  169515 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:03:17.601032  169515 kubeadm.go:322] 
	I1004 02:03:17.601053  169515 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:03:17.601102  169515 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:03:17.601157  169515 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:03:17.601164  169515 kubeadm.go:322] 
	I1004 02:03:17.601213  169515 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:03:17.601226  169515 kubeadm.go:322] 
	I1004 02:03:17.601282  169515 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:03:17.601289  169515 kubeadm.go:322] 
	I1004 02:03:17.601369  169515 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:03:17.601470  169515 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:03:17.601584  169515 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:03:17.601594  169515 kubeadm.go:322] 
	I1004 02:03:17.601698  169515 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:03:17.601780  169515 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:03:17.601791  169515 kubeadm.go:322] 
	I1004 02:03:17.601919  169515 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602052  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:03:17.602084  169515 kubeadm.go:322] 	--control-plane 
	I1004 02:03:17.602094  169515 kubeadm.go:322] 
	I1004 02:03:17.602212  169515 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:03:17.602221  169515 kubeadm.go:322] 
	I1004 02:03:17.602358  169515 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token 23w16s.bx0je8b3n2xujqpx \
	I1004 02:03:17.602512  169515 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:03:17.602532  169515 cni.go:84] Creating CNI manager for ""
	I1004 02:03:17.602543  169515 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 02:03:17.605029  169515 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:03:17.606395  169515 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:03:17.633626  169515 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1004 02:03:17.708983  169515 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:03:17.709074  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.709079  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=default-k8s-diff-port-239802 minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:17.817989  169515 ops.go:34] apiserver oom_adj: -16
	I1004 02:03:18.073171  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.187308  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:18.820889  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.320388  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:19.820323  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.320333  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:20.821163  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.320330  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:21.821019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.321019  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:22.821177  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.321168  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:23.820299  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.320582  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:24.820863  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.320469  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:25.820489  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.321120  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:26.820999  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.321119  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:27.820996  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.320295  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:28.821014  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.320832  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:29.820960  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.321064  169515 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:03:30.472351  169515 kubeadm.go:1081] duration metric: took 12.76333985s to wait for elevateKubeSystemPrivileges.
	I1004 02:03:30.472398  169515 kubeadm.go:406] StartCluster complete in 5m7.157236676s
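The repeated `kubectl get sa default` calls above are the elevateKubeSystemPrivileges wait: the command is retried until the controller-manager has created the default ServiceAccount, which is when the minikube-rbac cluster-admin binding created at 02:03:17 can take effect. A small sketch of that retry loop (the command and paths are copied verbatim from the log; the 2-minute deadline and 500ms interval are assumptions):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

func main() {
    cmd := `sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig`
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        // A zero exit code means the default ServiceAccount now exists.
        if err := exec.Command("/bin/bash", "-c", cmd).Run(); err == nil {
            fmt.Println("default ServiceAccount exists")
            return
        }
        time.Sleep(500 * time.Millisecond)
    }
    fmt.Println("timed out waiting for default ServiceAccount")
}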
	I1004 02:03:30.472421  169515 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.472516  169515 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:03:30.474474  169515 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:03:30.474744  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:03:30.474777  169515 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:03:30.474868  169515 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474889  169515 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474894  169515 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474903  169515 addons.go:240] addon storage-provisioner should already be in state true
	I1004 02:03:30.474906  169515 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-239802"
	I1004 02:03:30.474929  169515 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.474938  169515 addons.go:240] addon metrics-server should already be in state true
	I1004 02:03:30.474973  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474985  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.474911  169515 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-239802"
	I1004 02:03:30.474998  169515 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475437  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475468  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475439  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.475392  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.475657  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.493623  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35603
	I1004 02:03:30.493662  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37225
	I1004 02:03:30.493781  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33157
	I1004 02:03:30.494163  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494166  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494444  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.494788  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494790  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.494812  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.494815  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495193  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.495213  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495237  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495402  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.495555  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.495810  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.495842  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.496520  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.496559  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.499305  169515 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-239802"
	W1004 02:03:30.499322  169515 addons.go:240] addon default-storageclass should already be in state true
	I1004 02:03:30.499345  169515 host.go:66] Checking if "default-k8s-diff-port-239802" exists ...
	I1004 02:03:30.499914  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.499942  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.514137  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44377
	I1004 02:03:30.514752  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.515464  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.515494  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.515576  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I1004 02:03:30.515848  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.515990  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.516030  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.516461  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.516481  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.516840  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.517034  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.518156  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.518191  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36731
	I1004 02:03:30.521584  169515 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1004 02:03:30.518793  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.518847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.522961  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1004 02:03:30.522981  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1004 02:03:30.523002  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.524589  169515 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:03:30.523376  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.524627  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.525081  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.525873  169515 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.525888  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:03:30.525904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.526430  169515 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:03:30.526461  169515 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:03:30.526677  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.530913  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.531170  169515 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-239802" context rescaled to 1 replicas
	I1004 02:03:30.531206  169515 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.105 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:03:30.532986  169515 out.go:177] * Verifying Kubernetes components...
	I1004 02:03:30.531340  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.531757  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.533318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.533937  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.535094  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:30.535197  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.535227  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.535231  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535394  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.535440  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.535914  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.535943  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.536116  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.549570  169515 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33239
	I1004 02:03:30.550039  169515 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:03:30.550714  169515 main.go:141] libmachine: Using API Version  1
	I1004 02:03:30.550744  169515 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:03:30.551157  169515 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:03:30.551347  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetState
	I1004 02:03:30.553113  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .DriverName
	I1004 02:03:30.553403  169515 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.553418  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:03:30.553433  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHHostname
	I1004 02:03:30.555904  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556293  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:98:4e", ip: ""} in network mk-default-k8s-diff-port-239802: {Iface:virbr5 ExpiryTime:2023-10-04 02:58:07 +0000 UTC Type:0 Mac:52:54:00:4b:98:4e Iaid: IPaddr:192.168.61.105 Prefix:24 Hostname:default-k8s-diff-port-239802 Clientid:01:52:54:00:4b:98:4e}
	I1004 02:03:30.556318  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | domain default-k8s-diff-port-239802 has defined IP address 192.168.61.105 and MAC address 52:54:00:4b:98:4e in network mk-default-k8s-diff-port-239802
	I1004 02:03:30.556538  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHPort
	I1004 02:03:30.556748  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHKeyPath
	I1004 02:03:30.556908  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .GetSSHUsername
	I1004 02:03:30.557059  169515 sshutil.go:53] new ssh client: &{IP:192.168.61.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/default-k8s-diff-port-239802/id_rsa Username:docker}
	I1004 02:03:30.745640  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:03:30.772975  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1004 02:03:30.772997  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1004 02:03:30.828675  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:03:30.862436  169515 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.862505  169515 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:03:30.867582  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1004 02:03:30.867606  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1004 02:03:30.869762  169515 node_ready.go:49] node "default-k8s-diff-port-239802" has status "Ready":"True"
	I1004 02:03:30.869782  169515 node_ready.go:38] duration metric: took 7.313127ms waiting for node "default-k8s-diff-port-239802" to be "Ready" ...
	I1004 02:03:30.869791  169515 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:30.878259  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:30.953707  169515 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:30.953739  169515 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1004 02:03:31.080848  169515 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1004 02:03:31.923980  169515 pod_ready.go:97] error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924020  169515 pod_ready.go:81] duration metric: took 1.045735768s waiting for pod "coredns-5dd5756b68-br77m" in "kube-system" namespace to be "Ready" ...
	E1004 02:03:31.924034  169515 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-br77m" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-br77m" not found
	I1004 02:03:31.924041  169515 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.089720  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.344027143s)
	I1004 02:03:33.089798  169515 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.227266643s)
	I1004 02:03:33.089820  169515 start.go:923] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1004 02:03:33.089826  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089749  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.261039922s)
	I1004 02:03:33.089847  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.089856  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.089872  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090197  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090217  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090228  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090226  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090240  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090292  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090310  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090322  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.090333  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.090332  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.090486  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.090501  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.090993  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.091015  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.120294  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.120321  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.120639  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.120660  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379169  169515 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.298272317s)
	I1004 02:03:33.379231  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379247  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379568  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379585  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379595  169515 main.go:141] libmachine: Making call to close driver server
	I1004 02:03:33.379608  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) Calling .Close
	I1004 02:03:33.379884  169515 main.go:141] libmachine: (default-k8s-diff-port-239802) DBG | Closing plugin on server side
	I1004 02:03:33.379928  169515 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:03:33.379952  169515 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:03:33.379965  169515 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-239802"
	I1004 02:03:33.382638  169515 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1004 02:03:33.384185  169515 addons.go:502] enable addons completed in 2.909411548s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1004 02:03:33.970600  169515 pod_ready.go:92] pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.970634  169515 pod_ready.go:81] duration metric: took 2.046583312s waiting for pod "coredns-5dd5756b68-gjn6v" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.970649  169515 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976833  169515 pod_ready.go:92] pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.976858  169515 pod_ready.go:81] duration metric: took 6.200437ms waiting for pod "etcd-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.976870  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.983984  169515 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:33.984006  169515 pod_ready.go:81] duration metric: took 7.126822ms waiting for pod "kube-apiserver-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:33.984016  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269435  169515 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.269462  169515 pod_ready.go:81] duration metric: took 285.437635ms waiting for pod "kube-controller-manager-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.269476  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667111  169515 pod_ready.go:92] pod "kube-proxy-b5ltp" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:34.667138  169515 pod_ready.go:81] duration metric: took 397.655055ms waiting for pod "kube-proxy-b5ltp" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:34.667147  169515 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068656  169515 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace has status "Ready":"True"
	I1004 02:03:35.068692  169515 pod_ready.go:81] duration metric: took 401.53728ms waiting for pod "kube-scheduler-default-k8s-diff-port-239802" in "kube-system" namespace to be "Ready" ...
	I1004 02:03:35.068706  169515 pod_ready.go:38] duration metric: took 4.198904278s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:03:35.068731  169515 api_server.go:52] waiting for apiserver process to appear ...
	I1004 02:03:35.068800  169515 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 02:03:35.085104  169515 api_server.go:72] duration metric: took 4.553859804s to wait for apiserver process to appear ...
	I1004 02:03:35.085129  169515 api_server.go:88] waiting for apiserver healthz status ...
	I1004 02:03:35.085148  169515 api_server.go:253] Checking apiserver healthz at https://192.168.61.105:8444/healthz ...
	I1004 02:03:35.093144  169515 api_server.go:279] https://192.168.61.105:8444/healthz returned 200:
	ok
	I1004 02:03:35.094563  169515 api_server.go:141] control plane version: v1.28.2
	I1004 02:03:35.094583  169515 api_server.go:131] duration metric: took 9.447369ms to wait for apiserver health ...
	I1004 02:03:35.094591  169515 system_pods.go:43] waiting for kube-system pods to appear ...
	I1004 02:03:35.271828  169515 system_pods.go:59] 8 kube-system pods found
	I1004 02:03:35.271855  169515 system_pods.go:61] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.271862  169515 system_pods.go:61] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.271867  169515 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.271871  169515 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.271875  169515 system_pods.go:61] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.271879  169515 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.271886  169515 system_pods.go:61] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.271891  169515 system_pods.go:61] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.271899  169515 system_pods.go:74] duration metric: took 177.302484ms to wait for pod list to return data ...
	I1004 02:03:35.271906  169515 default_sa.go:34] waiting for default service account to be created ...
	I1004 02:03:35.466915  169515 default_sa.go:45] found service account: "default"
	I1004 02:03:35.466956  169515 default_sa.go:55] duration metric: took 195.042376ms for default service account to be created ...
	I1004 02:03:35.466968  169515 system_pods.go:116] waiting for k8s-apps to be running ...
	I1004 02:03:35.669331  169515 system_pods.go:86] 8 kube-system pods found
	I1004 02:03:35.669358  169515 system_pods.go:89] "coredns-5dd5756b68-gjn6v" [18ad413f-043e-443c-ad1c-83d04099b47d] Running
	I1004 02:03:35.669363  169515 system_pods.go:89] "etcd-default-k8s-diff-port-239802" [32951ff0-d25c-419b-92fc-a13f4643d0a2] Running
	I1004 02:03:35.669368  169515 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-239802" [e371d4fb-ef7f-4315-a068-4d6ed4b31baa] Running
	I1004 02:03:35.669372  169515 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-239802" [84bd636a-02fb-40ed-92d1-2f35e0437f21] Running
	I1004 02:03:35.669376  169515 system_pods.go:89] "kube-proxy-b5ltp" [a7299ef0-9666-4675-8397-7b3e58ac9605] Running
	I1004 02:03:35.669380  169515 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-239802" [310ed364-5785-46be-b980-27eec1d99e9d] Running
	I1004 02:03:35.669386  169515 system_pods.go:89] "metrics-server-57f55c9bc5-c5ww7" [94967866-d714-41ed-8ee2-6c7eb8db836e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1004 02:03:35.669391  169515 system_pods.go:89] "storage-provisioner" [a1341113-6631-4c74-9f66-89c883fc4e08] Running
	I1004 02:03:35.669397  169515 system_pods.go:126] duration metric: took 202.42259ms to wait for k8s-apps to be running ...
	I1004 02:03:35.669404  169515 system_svc.go:44] waiting for kubelet service to be running ....
	I1004 02:03:35.669446  169515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:03:35.685440  169515 system_svc.go:56] duration metric: took 16.022733ms WaitForService to wait for kubelet.
	I1004 02:03:35.685475  169515 kubeadm.go:581] duration metric: took 5.154237901s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1004 02:03:35.685502  169515 node_conditions.go:102] verifying NodePressure condition ...
	I1004 02:03:35.867523  169515 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1004 02:03:35.867616  169515 node_conditions.go:123] node cpu capacity is 2
	I1004 02:03:35.867645  169515 node_conditions.go:105] duration metric: took 182.13715ms to run NodePressure ...
	I1004 02:03:35.867672  169515 start.go:228] waiting for startup goroutines ...
	I1004 02:03:35.867711  169515 start.go:233] waiting for cluster config update ...
	I1004 02:03:35.867729  169515 start.go:242] writing updated cluster config ...
	I1004 02:03:35.868000  169515 ssh_runner.go:195] Run: rm -f paused
	I1004 02:03:35.921562  169515 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1004 02:03:35.924514  169515 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-239802" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:50:21 UTC, ends at Wed 2023-10-04 02:09:43 UTC. --
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.218649230Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385383218631319,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=631dbd4e-0c2d-4db1-a18a-58cdb642e2f7 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.219257076Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1be0cc62-ce91-4e85-8ce0-06a290674452 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.219306210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1be0cc62-ce91-4e85-8ce0-06a290674452 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.219459120Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1be0cc62-ce91-4e85-8ce0-06a290674452 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.265283961Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f3b67862-f01c-43c1-b944-9bfefe0f078a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.265347708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f3b67862-f01c-43c1-b944-9bfefe0f078a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.267063453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c0794097-7423-4494-b6fd-f8388528d3bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.267604816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385383267588369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c0794097-7423-4494-b6fd-f8388528d3bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.268285995Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d00ea64d-d52a-45b6-bda1-771807c5763f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.268331128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d00ea64d-d52a-45b6-bda1-771807c5763f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.268487946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d00ea64d-d52a-45b6-bda1-771807c5763f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.314269639Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=88da7eff-db53-4b8b-83ee-7fefadba2180 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.314349066Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=88da7eff-db53-4b8b-83ee-7fefadba2180 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.315918307Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ab0e3260-386a-4849-85f8-5d8656bf68a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.316278087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385383316265848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ab0e3260-386a-4849-85f8-5d8656bf68a1 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.316781495Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b4905383-f44b-49bd-ba72-5a75c7cd986f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.316826212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b4905383-f44b-49bd-ba72-5a75c7cd986f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.316979862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b4905383-f44b-49bd-ba72-5a75c7cd986f name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.359120931Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=33bbf126-9403-4b11-b848-b3f8cac333a1 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.359178948Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=33bbf126-9403-4b11-b848-b3f8cac333a1 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.361098131Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f8220afe-2dbf-4dde-8687-7354eb9dd779 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.361604879Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385383361587190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f8220afe-2dbf-4dde-8687-7354eb9dd779 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.362066069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=27ebf83d-1d8d-4608-95e7-f14c97eb89d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.362113785Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=27ebf83d-1d8d-4608-95e7-f14c97eb89d1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:09:43 old-k8s-version-107182 crio[705]: time="2023-10-04 02:09:43.362261096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2,PodSandboxId:c0a0bd64bda5f39beb23a0aab203270343248fe568755bbbeb7a7526f481d588,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696384594718615028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71715868-9727-4d70-b5b4-5f0199e0579a,},Annotations:map[string]string{io.kubernetes.container.hash: 45f2a5a5,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa,PodSandboxId:a6d1ca9ae37e86ce9fc2f21c5d69ee95413c15ad51484982d63b3660d5157ad5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1696384594079883813,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-nbf4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8965b384-aa80-4e12-8323-4129cc7b53c3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d8cde01,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35,PodSandboxId:0cf3f875255285e8bf04f79480b719f65479f66383b05a4888c83489c4cd1688,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1696384592247662309,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8lcf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50235
cdc-deb8-47a6-974a-943636afd805,},Annotations:map[string]string{io.kubernetes.container.hash: 57251e95,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818,PodSandboxId:29aea50160d272e6696ab01d42716a422c433adce2271f590e951a1b19cf3f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1696384566769314696,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dec8256181ff519fcd0206fc263f213f,},Annotations:map[s
tring]string{io.kubernetes.container.hash: d508b8fe,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12,PodSandboxId:62d1f4d364c97bd2977cc8b1fa4de634e36b0780ea50584a667f814a90f209d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1696384565812821989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8,PodSandboxId:1fa3b4f49c65206ad0379f4ea11d176123138cd28ba3d197ea9986e82512a51d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1696384565531683691,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,},Ann
otations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b,PodSandboxId:b3ac3bc12deb7b4baa83e59fef1ded5cacf0dcc7c6b71ec87e40f7d2d5de12c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1696384565392749844,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-107182,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ab9b7d9d902c2a45d50d3a2fa34072,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 8e6ab6ae,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=27ebf83d-1d8d-4608-95e7-f14c97eb89d1 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9d330530b6df7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   c0a0bd64bda5f       storage-provisioner
	13ec581cb5718       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   a6d1ca9ae37e8       coredns-5644d7b6d9-nbf4s
	cf3d049e86396       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   0cf3f87525528       kube-proxy-8lcf5
	a7a399e035861       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   29aea50160d27       etcd-old-k8s-version-107182
	438e23cb6e38e       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   62d1f4d364c97       kube-scheduler-old-k8s-version-107182
	1b2995e232648       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   1fa3b4f49c652       kube-controller-manager-old-k8s-version-107182
	65c2228b5316d       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            0                   b3ac3bc12deb7       kube-apiserver-old-k8s-version-107182
	
	* 
	* ==> coredns [13ec581cb57188df3952d578eb6f24972d9dff1ea91726f904972f92bb8fcdaa] <==
	* .:53
	2023-10-04T01:56:34.498Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-10-04T01:56:34.498Z [INFO] CoreDNS-1.6.2
	2023-10-04T01:56:34.498Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-04T01:56:34.518Z [INFO] 127.0.0.1:50899 - 24978 "HINFO IN 2261754534219632708.6796649709746906629. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019711849s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-107182
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-107182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=old-k8s-version-107182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T01_56_16_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 01:56:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:09:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:09:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:09:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:09:11 +0000   Wed, 04 Oct 2023 01:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.182
	  Hostname:    old-k8s-version-107182
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 7806f00958934b92827665aaf231c2c8
	 System UUID:                7806f009-5893-4b92-8276-65aaf231c2c8
	 Boot ID:                    ba7906a7-94a1-4660-9620-ac43e770ae22
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-nbf4s                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-107182                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-107182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-107182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-8lcf5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-107182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-cl45r                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-107182     Node old-k8s-version-107182 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-107182  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:50] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.080190] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.680109] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.534569] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.166313] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.560588] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.218454] systemd-fstab-generator[630]: Ignoring "noauto" for root device
	[  +0.130081] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.181684] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.109279] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.274661] systemd-fstab-generator[689]: Ignoring "noauto" for root device
	[ +19.754098] systemd-fstab-generator[1011]: Ignoring "noauto" for root device
	[  +0.457158] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 4 01:51] kauditd_printk_skb: 13 callbacks suppressed
	[ +10.094340] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 01:56] systemd-fstab-generator[3174]: Ignoring "noauto" for root device
	[  +0.986926] kauditd_printk_skb: 8 callbacks suppressed
	[ +40.176446] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [a7a399e035861b111f2ddc0b0e8a856fd20b12b0fed330a1c8ce883064181818] <==
	* 2023-10-04 01:56:06.944804 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-04 01:56:06.945877 I | etcdserver: ff4c26660998c2c8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-04 01:56:06.946216 I | etcdserver/membership: added member ff4c26660998c2c8 [https://192.168.72.182:2380] to cluster 1c15affd5c0f3dba
	2023-10-04 01:56:06.948060 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-04 01:56:06.948416 I | embed: listening for metrics on http://192.168.72.182:2381
	2023-10-04 01:56:06.948735 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-04 01:56:07.932454 I | raft: ff4c26660998c2c8 is starting a new election at term 1
	2023-10-04 01:56:07.932670 I | raft: ff4c26660998c2c8 became candidate at term 2
	2023-10-04 01:56:07.932712 I | raft: ff4c26660998c2c8 received MsgVoteResp from ff4c26660998c2c8 at term 2
	2023-10-04 01:56:07.932746 I | raft: ff4c26660998c2c8 became leader at term 2
	2023-10-04 01:56:07.932771 I | raft: raft.node: ff4c26660998c2c8 elected leader ff4c26660998c2c8 at term 2
	2023-10-04 01:56:07.933148 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-04 01:56:07.934860 I | etcdserver: published {Name:old-k8s-version-107182 ClientURLs:[https://192.168.72.182:2379]} to cluster 1c15affd5c0f3dba
	2023-10-04 01:56:07.934915 I | embed: ready to serve client requests
	2023-10-04 01:56:07.936437 I | embed: serving client requests on 192.168.72.182:2379
	2023-10-04 01:56:07.940760 I | embed: ready to serve client requests
	2023-10-04 01:56:07.942150 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-04 01:56:07.952916 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-04 01:56:07.953013 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-04 01:56:33.395838 W | etcdserver: read-only range request "key:\"/registry/storageclasses/standard\" " with result "range_response_count:0 size:5" took too long (102.845874ms) to execute
	2023-10-04 01:56:33.417678 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-nbf4s\" " with result "range_response_count:1 size:1367" took too long (113.021159ms) to execute
	2023-10-04 01:58:24.847894 W | etcdserver: request "header:<ID:14035621038172576277 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.182\" mod_revision:532 > success:<request_put:<key:\"/registry/masterleases/192.168.72.182\" value_size:69 lease:4812249001317800467 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.182\" > >>" with result "size:16" took too long (347.589649ms) to execute
	2023-10-04 01:58:25.233408 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (225.416279ms) to execute
	2023-10-04 02:06:07.979604 I | mvcc: store.index: compact 663
	2023-10-04 02:06:07.982023 I | mvcc: finished scheduled compaction at 663 (took 1.927459ms)
	
	* 
	* ==> kernel <==
	*  02:09:43 up 19 min,  0 users,  load average: 0.13, 0.18, 0.23
	Linux old-k8s-version-107182 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [65c2228b5316dff2fa26b2c5c726cc15ab6fd3cf10507c96a09c6ead283d2f3b] <==
	* I1004 02:02:12.197629       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:02:12.197940       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:02:12.198006       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:02:12.198028       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:04:12.198871       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:04:12.198991       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:04:12.199064       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:04:12.199075       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:06:12.199402       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:06:12.199649       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:06:12.199743       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:06:12.199755       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:07:12.200795       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:07:12.201204       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:07:12.201612       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:07:12.201631       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:09:12.203229       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1004 02:09:12.208303       1 handler_proxy.go:99] no RequestInfo found in the context
	E1004 02:09:12.208624       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:09:12.208670       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [1b2995e23264878e276e136dc1fba81f558426f5cb0be5c497eb2e4386cbe5b8] <==
	* W1004 02:03:28.598992       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:03:35.698225       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:04:00.601284       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:04:05.950113       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:04:32.604133       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:04:36.207094       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:05:04.606868       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:05:06.459388       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:05:36.608983       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:05:36.711879       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1004 02:06:06.964139       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:06:08.611213       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:06:37.216159       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:06:40.613371       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:07:07.468644       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:07:12.615317       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:07:37.720964       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:07:44.617298       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:08:07.973776       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:08:16.619141       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:08:38.226291       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:08:48.621405       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:09:08.478416       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1004 02:09:20.623749       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1004 02:09:38.730895       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [cf3d049e863960102528f9d9c386950441bde9859deb97fa601af58a50586f35] <==
	* W1004 01:56:33.542599       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1004 01:56:33.576578       1 node.go:135] Successfully retrieved node IP: 192.168.72.182
	I1004 01:56:33.576646       1 server_others.go:149] Using iptables Proxier.
	I1004 01:56:33.600565       1 server.go:529] Version: v1.16.0
	I1004 01:56:33.642977       1 config.go:313] Starting service config controller
	I1004 01:56:33.643180       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1004 01:56:33.663985       1 config.go:131] Starting endpoints config controller
	I1004 01:56:33.664063       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1004 01:56:33.771141       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1004 01:56:33.772460       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [438e23cb6e38e44e102a24d932f5a0620d4ef6e9ce4b826e5ff0334008c31a12] <==
	* I1004 01:56:11.204389       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1004 01:56:11.204857       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1004 01:56:11.252103       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:56:11.270727       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:56:11.271187       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 01:56:11.271481       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:56:11.271671       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:56:11.271878       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 01:56:11.273455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:11.273735       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:56:11.273837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:56:11.275703       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:11.283631       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:56:12.264742       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 01:56:12.277302       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 01:56:12.277739       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 01:56:12.282034       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 01:56:12.282133       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 01:56:12.283090       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 01:56:12.287979       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:12.288075       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 01:56:12.288158       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1004 01:56:12.288210       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 01:56:12.289839       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 01:56:31.359736       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:50:21 UTC, ends at Wed 2023-10-04 02:09:43 UTC. --
	Oct 04 02:05:16 old-k8s-version-107182 kubelet[3180]: E1004 02:05:16.202612    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:28 old-k8s-version-107182 kubelet[3180]: E1004 02:05:28.202273    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:43 old-k8s-version-107182 kubelet[3180]: E1004 02:05:43.202988    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:05:57 old-k8s-version-107182 kubelet[3180]: E1004 02:05:57.202173    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:04 old-k8s-version-107182 kubelet[3180]: E1004 02:06:04.284308    3180 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Oct 04 02:06:11 old-k8s-version-107182 kubelet[3180]: E1004 02:06:11.202626    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:24 old-k8s-version-107182 kubelet[3180]: E1004 02:06:24.202346    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:38 old-k8s-version-107182 kubelet[3180]: E1004 02:06:38.202151    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:06:52 old-k8s-version-107182 kubelet[3180]: E1004 02:06:52.204820    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:07:07 old-k8s-version-107182 kubelet[3180]: E1004 02:07:07.202476    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:07:18 old-k8s-version-107182 kubelet[3180]: E1004 02:07:18.235398    3180 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:07:18 old-k8s-version-107182 kubelet[3180]: E1004 02:07:18.235571    3180 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:07:18 old-k8s-version-107182 kubelet[3180]: E1004 02:07:18.235642    3180 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:07:18 old-k8s-version-107182 kubelet[3180]: E1004 02:07:18.235679    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Oct 04 02:07:31 old-k8s-version-107182 kubelet[3180]: E1004 02:07:31.202592    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:07:42 old-k8s-version-107182 kubelet[3180]: E1004 02:07:42.203627    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:07:54 old-k8s-version-107182 kubelet[3180]: E1004 02:07:54.204495    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:08:08 old-k8s-version-107182 kubelet[3180]: E1004 02:08:08.203343    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:08:21 old-k8s-version-107182 kubelet[3180]: E1004 02:08:21.202590    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:08:35 old-k8s-version-107182 kubelet[3180]: E1004 02:08:35.202601    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:08:48 old-k8s-version-107182 kubelet[3180]: E1004 02:08:48.202199    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:09:00 old-k8s-version-107182 kubelet[3180]: E1004 02:09:00.202440    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:09:13 old-k8s-version-107182 kubelet[3180]: E1004 02:09:13.202219    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:09:28 old-k8s-version-107182 kubelet[3180]: E1004 02:09:28.202114    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 04 02:09:41 old-k8s-version-107182 kubelet[3180]: E1004 02:09:41.202493    3180 pod_workers.go:191] Error syncing pod 93297548-dde0-4cd3-b47f-a2a867cca7c4 ("metrics-server-74d5856cc6-cl45r_kube-system(93297548-dde0-4cd3-b47f-a2a867cca7c4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [9d330530b6df79f28617a2d848a6388820c421800e5d2448e06efc760749ccd2] <==
	* I1004 01:56:34.832446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 01:56:34.846127       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 01:56:34.846215       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 01:56:34.857939       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 01:56:34.858123       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795!
	I1004 01:56:34.861406       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1aefe08-19f6-4f35-bb5e-129713d0fae4", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795 became leader
	I1004 01:56:34.958464       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-107182_f09f8e1e-3490-4d20-ae99-2574c1050795!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-107182 -n old-k8s-version-107182
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-107182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-cl45r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r: exit status 1 (70.269646ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-cl45r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-107182 describe pod metrics-server-74d5856cc6-cl45r: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (184.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (143.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1004 02:12:40.132774  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/no-preload-273516/client.crt: no such file or directory
E1004 02:12:45.424215  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-10-04 02:14:59.710599543 +0000 UTC m=+5492.081630580
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-239802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.042µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-239802 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-239802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-239802 logs -n 25: (1.446447054s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo iptables -t nat -L -n -v                        |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl status kubelet                        |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl cat kubelet                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo journalctl -xeu kubelet                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC |                     |
	|         | sudo systemctl status docker                         |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl cat docker                            |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC |                     |
	|         | sudo docker system info                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | cri-docker --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl cat cri-docker                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116 sudo cat                | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116 sudo cat                | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo cri-dockerd --version                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC |                     |
	|         | sudo systemctl status                                |                           |         |         |                     |                     |
	|         | containerd --all --full                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl cat containerd                        |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116 sudo cat                | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo cat                                             |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo containerd config dump                          |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl status crio                           |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo systemctl cat crio                              |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo find /etc/crio -type f                          |                           |         |         |                     |                     |
	|         | -exec sh -c 'echo {}; cat {}'                        |                           |         |         |                     |                     |
	|         | \;                                                   |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:13 UTC |
	|         | sudo crio config                                     |                           |         |         |                     |                     |
	| delete  | -p enable-default-cni-171116                         | enable-default-cni-171116 | jenkins | v1.31.2 | 04 Oct 23 02:13 UTC | 04 Oct 23 02:14 UTC |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 02:13:45
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 02:13:45.820919  180785 out.go:296] Setting OutFile to fd 1 ...
	I1004 02:13:45.821355  180785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:13:45.821404  180785 out.go:309] Setting ErrFile to fd 2...
	I1004 02:13:45.821422  180785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 02:13:45.821977  180785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 02:13:45.823213  180785 out.go:303] Setting JSON to false
	I1004 02:13:45.824766  180785 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10577,"bootTime":1696375049,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 02:13:45.824853  180785 start.go:138] virtualization: kvm guest
	I1004 02:13:45.827239  180785 out.go:177] * [bridge-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 02:13:45.829054  180785 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 02:13:45.829056  180785 notify.go:220] Checking for updates...
	I1004 02:13:45.830744  180785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 02:13:45.832429  180785 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:13:45.834614  180785 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:13:45.836192  180785 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 02:13:45.837782  180785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 02:13:45.839836  180785 config.go:182] Loaded profile config "default-k8s-diff-port-239802": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:13:45.839978  180785 config.go:182] Loaded profile config "enable-default-cni-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:13:45.840113  180785 config.go:182] Loaded profile config "flannel-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:13:45.840232  180785 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 02:13:45.893984  180785 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 02:13:45.896267  180785 start.go:298] selected driver: kvm2
	I1004 02:13:45.896286  180785 start.go:902] validating driver "kvm2" against <nil>
	I1004 02:13:45.896303  180785 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 02:13:45.897332  180785 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:13:45.897434  180785 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 02:13:45.915600  180785 install.go:137] /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2 version is 1.31.2
	I1004 02:13:45.915652  180785 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 02:13:45.915857  180785 start_flags.go:923] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1004 02:13:45.915894  180785 cni.go:84] Creating CNI manager for "bridge"
	I1004 02:13:45.915903  180785 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 02:13:45.915911  180785 start_flags.go:321] config:
	{Name:bridge-171116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:13:45.916071  180785 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 02:13:45.918327  180785 out.go:177] * Starting control plane node bridge-171116 in cluster bridge-171116
	I1004 02:13:42.384679  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:42.385183  179277 main.go:141] libmachine: (flannel-171116) DBG | unable to find current IP address of domain flannel-171116 in network mk-flannel-171116
	I1004 02:13:42.385212  179277 main.go:141] libmachine: (flannel-171116) DBG | I1004 02:13:42.385097  179300 retry.go:31] will retry after 1.75577141s: waiting for machine to come up
	I1004 02:13:44.143075  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:44.143429  179277 main.go:141] libmachine: (flannel-171116) DBG | unable to find current IP address of domain flannel-171116 in network mk-flannel-171116
	I1004 02:13:44.143448  179277 main.go:141] libmachine: (flannel-171116) DBG | I1004 02:13:44.143419  179300 retry.go:31] will retry after 3.358513041s: waiting for machine to come up
	I1004 02:13:45.920053  180785 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:13:45.920105  180785 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 02:13:45.920115  180785 cache.go:57] Caching tarball of preloaded images
	I1004 02:13:45.920216  180785 preload.go:174] Found /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1004 02:13:45.920231  180785 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 02:13:45.920355  180785 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/config.json ...
	I1004 02:13:45.920378  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/config.json: {Name:mkd9f729df8c3c63d036eb6b80773027ba72005b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:13:45.920529  180785 start.go:365] acquiring machines lock for bridge-171116: {Name:mk3b62daeec8028bbfafd6e73cbab8c6a0834ae4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1004 02:13:47.504046  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:47.504504  179277 main.go:141] libmachine: (flannel-171116) DBG | unable to find current IP address of domain flannel-171116 in network mk-flannel-171116
	I1004 02:13:47.504532  179277 main.go:141] libmachine: (flannel-171116) DBG | I1004 02:13:47.504462  179300 retry.go:31] will retry after 3.808486408s: waiting for machine to come up
	I1004 02:13:51.316043  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:51.316632  179277 main.go:141] libmachine: (flannel-171116) DBG | unable to find current IP address of domain flannel-171116 in network mk-flannel-171116
	I1004 02:13:51.316664  179277 main.go:141] libmachine: (flannel-171116) DBG | I1004 02:13:51.316585  179300 retry.go:31] will retry after 4.486270417s: waiting for machine to come up
	I1004 02:13:55.807594  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.808154  179277 main.go:141] libmachine: (flannel-171116) Found IP for machine: 192.168.50.26
	I1004 02:13:55.808184  179277 main.go:141] libmachine: (flannel-171116) Reserving static IP address...
	I1004 02:13:55.808202  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has current primary IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.808488  179277 main.go:141] libmachine: (flannel-171116) DBG | unable to find host DHCP lease matching {name: "flannel-171116", mac: "52:54:00:a2:b1:18", ip: "192.168.50.26"} in network mk-flannel-171116
	I1004 02:13:55.894830  179277 main.go:141] libmachine: (flannel-171116) DBG | Getting to WaitForSSH function...
	I1004 02:13:55.894871  179277 main.go:141] libmachine: (flannel-171116) Reserved static IP address: 192.168.50.26
	I1004 02:13:55.894886  179277 main.go:141] libmachine: (flannel-171116) Waiting for SSH to be available...
	I1004 02:13:55.897913  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.898380  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:55.898411  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.898575  179277 main.go:141] libmachine: (flannel-171116) DBG | Using SSH client type: external
	I1004 02:13:55.898609  179277 main.go:141] libmachine: (flannel-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa (-rw-------)
	I1004 02:13:55.898644  179277 main.go:141] libmachine: (flannel-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.26 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:13:55.898662  179277 main.go:141] libmachine: (flannel-171116) DBG | About to run SSH command:
	I1004 02:13:55.898677  179277 main.go:141] libmachine: (flannel-171116) DBG | exit 0
	I1004 02:13:55.990820  179277 main.go:141] libmachine: (flannel-171116) DBG | SSH cmd err, output: <nil>: 
	I1004 02:13:55.991118  179277 main.go:141] libmachine: (flannel-171116) KVM machine creation complete!
	I1004 02:13:55.991412  179277 main.go:141] libmachine: (flannel-171116) Calling .GetConfigRaw
	I1004 02:13:55.991917  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:55.992111  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:55.992252  179277 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:13:55.992268  179277 main.go:141] libmachine: (flannel-171116) Calling .GetState
	I1004 02:13:55.993684  179277 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:13:55.993699  179277 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:13:55.993705  179277 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:13:55.993712  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:55.996083  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.996473  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:55.996508  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:55.996586  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:55.996765  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:55.996939  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:55.997117  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:55.997289  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:55.997669  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:55.997683  179277 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:13:56.109545  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:13:56.109571  179277 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:13:56.109583  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.113020  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.113461  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.113495  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.113684  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:56.114108  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.114315  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.114499  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:56.114707  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:56.115186  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:56.115207  179277 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:13:56.231042  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 02:13:56.231122  179277 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:13:56.231133  179277 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:13:56.231142  179277 main.go:141] libmachine: (flannel-171116) Calling .GetMachineName
	I1004 02:13:56.231431  179277 buildroot.go:166] provisioning hostname "flannel-171116"
	I1004 02:13:56.231447  179277 main.go:141] libmachine: (flannel-171116) Calling .GetMachineName
	I1004 02:13:56.231635  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.234363  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.234728  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.234763  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.234895  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:56.235110  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.235287  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.235460  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:56.235674  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:56.236027  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:56.236043  179277 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-171116 && echo "flannel-171116" | sudo tee /etc/hostname
	I1004 02:13:56.371831  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-171116
	
	I1004 02:13:56.371859  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.377115  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.378713  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.378774  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.379084  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:56.379284  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.379464  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.379603  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:56.379773  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:56.380280  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:56.380306  179277 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-171116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-171116/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-171116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:13:56.513132  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:13:56.513166  179277 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 02:13:56.513193  179277 buildroot.go:174] setting up certificates
	I1004 02:13:56.513204  179277 provision.go:83] configureAuth start
	I1004 02:13:56.513218  179277 main.go:141] libmachine: (flannel-171116) Calling .GetMachineName
	I1004 02:13:56.513587  179277 main.go:141] libmachine: (flannel-171116) Calling .GetIP
	I1004 02:13:56.517208  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.517595  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.517625  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.517874  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.520520  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.520979  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.521014  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.521137  179277 provision.go:138] copyHostCerts
	I1004 02:13:56.521191  179277 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 02:13:56.521204  179277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 02:13:56.521282  179277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 02:13:56.521398  179277 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 02:13:56.521412  179277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 02:13:56.521452  179277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 02:13:56.521529  179277 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 02:13:56.521540  179277 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 02:13:56.521573  179277 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 02:13:56.521646  179277 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.flannel-171116 san=[192.168.50.26 192.168.50.26 localhost 127.0.0.1 minikube flannel-171116]
	I1004 02:13:57.588150  180785 start.go:369] acquired machines lock for "bridge-171116" in 11.667589105s
	I1004 02:13:57.588233  180785 start.go:93] Provisioning new machine with config: &{Name:bridge-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:bridge-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:13:57.588369  180785 start.go:125] createHost starting for "" (driver="kvm2")
	I1004 02:13:56.802073  179277 provision.go:172] copyRemoteCerts
	I1004 02:13:56.802139  179277 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:13:56.802168  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.804462  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.804817  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.804849  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.804998  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:56.805183  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.805351  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:56.805526  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:13:56.892261  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1004 02:13:56.918876  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:13:56.945373  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1004 02:13:56.973768  179277 provision.go:86] duration metric: configureAuth took 460.546409ms
	I1004 02:13:56.973790  179277 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:13:56.973993  179277 config.go:182] Loaded profile config "flannel-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:13:56.974099  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:56.977042  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.978422  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:56.978455  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:56.978828  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:56.979062  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.979282  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:56.979447  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:56.979624  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:56.979933  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:56.979965  179277 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:13:57.320586  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:13:57.320623  179277 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:13:57.320637  179277 main.go:141] libmachine: (flannel-171116) Calling .GetURL
	I1004 02:13:57.322181  179277 main.go:141] libmachine: (flannel-171116) DBG | Using libvirt version 6000000
	I1004 02:13:57.324741  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.325135  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.325170  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.325342  179277 main.go:141] libmachine: Docker is up and running!
	I1004 02:13:57.325355  179277 main.go:141] libmachine: Reticulating splines...
	I1004 02:13:57.325365  179277 client.go:171] LocalClient.Create took 25.580404797s
	I1004 02:13:57.325393  179277 start.go:167] duration metric: libmachine.API.Create for "flannel-171116" took 25.5804697s
	I1004 02:13:57.325405  179277 start.go:300] post-start starting for "flannel-171116" (driver="kvm2")
	I1004 02:13:57.325420  179277 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:13:57.325449  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:57.325728  179277 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:13:57.325763  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:57.328247  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.328686  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.328719  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.328888  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:57.329090  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:57.329269  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:57.329439  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:13:57.421528  179277 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:13:57.426539  179277 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 02:13:57.426560  179277 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 02:13:57.426628  179277 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 02:13:57.426728  179277 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 02:13:57.426842  179277 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 02:13:57.436991  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:13:57.461705  179277 start.go:303] post-start completed in 136.283169ms
	I1004 02:13:57.461765  179277 main.go:141] libmachine: (flannel-171116) Calling .GetConfigRaw
	I1004 02:13:57.462395  179277 main.go:141] libmachine: (flannel-171116) Calling .GetIP
	I1004 02:13:57.465434  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.465950  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.465989  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.466272  179277 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/config.json ...
	I1004 02:13:57.466465  179277 start.go:128] duration metric: createHost completed in 25.742894016s
	I1004 02:13:57.466505  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:57.469018  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.469353  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.469383  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.469507  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:57.469682  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:57.469827  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:57.469978  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:57.470136  179277 main.go:141] libmachine: Using SSH client type: native
	I1004 02:13:57.470599  179277 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.50.26 22 <nil> <nil>}
	I1004 02:13:57.470623  179277 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 02:13:57.587979  179277 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696385637.556615348
	
	I1004 02:13:57.588012  179277 fix.go:206] guest clock: 1696385637.556615348
	I1004 02:13:57.588022  179277 fix.go:219] Guest: 2023-10-04 02:13:57.556615348 +0000 UTC Remote: 2023-10-04 02:13:57.466489415 +0000 UTC m=+25.872966883 (delta=90.125933ms)
	I1004 02:13:57.588050  179277 fix.go:190] guest clock delta is within tolerance: 90.125933ms
	I1004 02:13:57.588058  179277 start.go:83] releasing machines lock for "flannel-171116", held for 25.864591863s
	I1004 02:13:57.588092  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:57.588437  179277 main.go:141] libmachine: (flannel-171116) Calling .GetIP
	I1004 02:13:57.591913  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.592380  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.592428  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.592625  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:57.593225  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:57.593457  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:13:57.593527  179277 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:13:57.593579  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:57.593722  179277 ssh_runner.go:195] Run: cat /version.json
	I1004 02:13:57.593749  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:13:57.599679  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.600103  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.600131  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.600155  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.600393  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:57.600586  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:57.600659  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:57.600686  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:57.600735  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:57.600889  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:13:57.600918  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:13:57.601121  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:13:57.601261  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:13:57.601443  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:13:57.687681  179277 ssh_runner.go:195] Run: systemctl --version
	I1004 02:13:57.716179  179277 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:13:57.885736  179277 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:13:57.892661  179277 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:13:57.892735  179277 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:13:57.916482  179277 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:13:57.916510  179277 start.go:469] detecting cgroup driver to use...
	I1004 02:13:57.916580  179277 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:13:57.937139  179277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:13:57.954305  179277 docker.go:197] disabling cri-docker service (if available) ...
	I1004 02:13:57.954454  179277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:13:57.971312  179277 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:13:57.988011  179277 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:13:58.128196  179277 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:13:58.282311  179277 docker.go:213] disabling docker service ...
	I1004 02:13:58.282559  179277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:13:58.296632  179277 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:13:58.311047  179277 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:13:58.442388  179277 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:13:58.576055  179277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:13:58.590160  179277 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:13:58.609752  179277 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 02:13:58.609818  179277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:13:58.621694  179277 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:13:58.621770  179277 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:13:58.634350  179277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:13:58.646208  179277 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:13:58.657675  179277 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:13:58.671221  179277 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:13:58.682387  179277 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:13:58.682459  179277 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:13:58.698448  179277 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
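The fallback above is the usual one: when the bridge-nf-call-iptables sysctl node is missing, br_netfilter has not been loaded yet, so the module is loaded and IPv4 forwarding is enabled afterwards. A standalone sketch of that check (illustrative; it must run as root and assumes modprobe is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl node is absent, br_netfilter is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
			os.Exit(1)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}
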
	I1004 02:13:58.709052  179277 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:13:58.853568  179277 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:13:59.067033  179277 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:13:59.067098  179277 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:13:59.078460  179277 start.go:537] Will wait 60s for crictl version
	I1004 02:13:59.078523  179277 ssh_runner.go:195] Run: which crictl
	I1004 02:13:59.083932  179277 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:13:59.129386  179277 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 02:13:59.129468  179277 ssh_runner.go:195] Run: crio --version
	I1004 02:13:59.189676  179277 ssh_runner.go:195] Run: crio --version
	I1004 02:13:59.244345  179277 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 02:13:57.590688  180785 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1004 02:13:57.590892  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:13:57.590945  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:13:57.609630  180785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43643
	I1004 02:13:57.610194  180785 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:13:57.610925  180785 main.go:141] libmachine: Using API Version  1
	I1004 02:13:57.610949  180785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:13:57.611341  180785 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:13:57.611547  180785 main.go:141] libmachine: (bridge-171116) Calling .GetMachineName
	I1004 02:13:57.611738  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:13:57.611896  180785 start.go:159] libmachine.API.Create for "bridge-171116" (driver="kvm2")
	I1004 02:13:57.611934  180785 client.go:168] LocalClient.Create starting
	I1004 02:13:57.611986  180785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem
	I1004 02:13:57.612028  180785 main.go:141] libmachine: Decoding PEM data...
	I1004 02:13:57.612053  180785 main.go:141] libmachine: Parsing certificate...
	I1004 02:13:57.612121  180785 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem
	I1004 02:13:57.612152  180785 main.go:141] libmachine: Decoding PEM data...
	I1004 02:13:57.612166  180785 main.go:141] libmachine: Parsing certificate...
	I1004 02:13:57.612190  180785 main.go:141] libmachine: Running pre-create checks...
	I1004 02:13:57.612206  180785 main.go:141] libmachine: (bridge-171116) Calling .PreCreateCheck
	I1004 02:13:57.612772  180785 main.go:141] libmachine: (bridge-171116) Calling .GetConfigRaw
	I1004 02:13:57.613269  180785 main.go:141] libmachine: Creating machine...
	I1004 02:13:57.613288  180785 main.go:141] libmachine: (bridge-171116) Calling .Create
	I1004 02:13:57.613469  180785 main.go:141] libmachine: (bridge-171116) Creating KVM machine...
	I1004 02:13:57.615246  180785 main.go:141] libmachine: (bridge-171116) DBG | found existing default KVM network
	I1004 02:13:57.617062  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.616869  181886 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:00:bb:21} reservation:<nil>}
	I1004 02:13:57.618711  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.618626  181886 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:a9:08:1d} reservation:<nil>}
	I1004 02:13:57.619948  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.619848  181886 network.go:214] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr5 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:b0:63} reservation:<nil>}
	I1004 02:13:57.623170  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.623090  181886 network.go:209] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010efc0}
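The network.go lines above show the kvm2 driver walking candidate private /24 subnets and skipping any whose gateway is already bound to a host bridge (virbr3/4/5 here) until it reaches the free 192.168.72.0/24. A simplified sketch of that scan (the candidate list and the "taken" test are approximations of what the log shows, not minikube's exact logic):

package main

import (
	"fmt"
	"net"
)

// gatewayInUse reports whether any host interface already owns addr,
// which is how a candidate subnet is treated as "taken" in this sketch.
func gatewayInUse(addr string) bool {
	ifaceAddrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range ifaceAddrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == addr {
			return true
		}
	}
	return false
}

func main() {
	// Candidate private /24s, in the same spirit as the subnets in the log.
	candidates := []string{"192.168.39", "192.168.50", "192.168.61", "192.168.72"}
	for _, prefix := range candidates {
		gw := prefix + ".1"
		if gatewayInUse(gw) {
			fmt.Printf("skipping subnet %s.0/24 that is taken\n", prefix)
			continue
		}
		fmt.Printf("using free private subnet %s.0/24 (gateway %s)\n", prefix, gw)
		return
	}
	fmt.Println("no free subnet found")
}
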
	I1004 02:13:57.629335  180785 main.go:141] libmachine: (bridge-171116) DBG | trying to create private KVM network mk-bridge-171116 192.168.72.0/24...
	I1004 02:13:57.723843  180785 main.go:141] libmachine: (bridge-171116) DBG | private KVM network mk-bridge-171116 192.168.72.0/24 created
	I1004 02:13:57.723900  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.723794  181886 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:13:57.723918  180785 main.go:141] libmachine: (bridge-171116) Setting up store path in /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116 ...
	I1004 02:13:57.723960  180785 main.go:141] libmachine: (bridge-171116) Building disk image from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 02:13:57.723990  180785 main.go:141] libmachine: (bridge-171116) Downloading /home/jenkins/minikube-integration/17348-128338/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1004 02:13:57.977636  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:57.977438  181886 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa...
	I1004 02:13:58.076986  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:58.076867  181886 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/bridge-171116.rawdisk...
	I1004 02:13:58.077017  180785 main.go:141] libmachine: (bridge-171116) DBG | Writing magic tar header
	I1004 02:13:58.077033  180785 main.go:141] libmachine: (bridge-171116) DBG | Writing SSH key tar header
	I1004 02:13:58.077054  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:13:58.076980  181886 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116 ...
	I1004 02:13:58.077161  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116
	I1004 02:13:58.077202  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube/machines
	I1004 02:13:58.077220  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116 (perms=drwx------)
	I1004 02:13:58.077238  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube/machines (perms=drwxr-xr-x)
	I1004 02:13:58.077249  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338/.minikube (perms=drwxr-xr-x)
	I1004 02:13:58.077267  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins/minikube-integration/17348-128338 (perms=drwxrwxr-x)
	I1004 02:13:58.077285  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1004 02:13:58.077303  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 02:13:58.077324  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17348-128338
	I1004 02:13:58.077335  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1004 02:13:58.077342  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home/jenkins
	I1004 02:13:58.077352  180785 main.go:141] libmachine: (bridge-171116) DBG | Checking permissions on dir: /home
	I1004 02:13:58.077383  180785 main.go:141] libmachine: (bridge-171116) DBG | Skipping /home - not owner
	I1004 02:13:58.077406  180785 main.go:141] libmachine: (bridge-171116) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1004 02:13:58.077421  180785 main.go:141] libmachine: (bridge-171116) Creating domain...
	I1004 02:13:58.078442  180785 main.go:141] libmachine: (bridge-171116) define libvirt domain using xml: 
	I1004 02:13:58.078470  180785 main.go:141] libmachine: (bridge-171116) <domain type='kvm'>
	I1004 02:13:58.078484  180785 main.go:141] libmachine: (bridge-171116)   <name>bridge-171116</name>
	I1004 02:13:58.078495  180785 main.go:141] libmachine: (bridge-171116)   <memory unit='MiB'>3072</memory>
	I1004 02:13:58.078512  180785 main.go:141] libmachine: (bridge-171116)   <vcpu>2</vcpu>
	I1004 02:13:58.078528  180785 main.go:141] libmachine: (bridge-171116)   <features>
	I1004 02:13:58.078545  180785 main.go:141] libmachine: (bridge-171116)     <acpi/>
	I1004 02:13:58.078557  180785 main.go:141] libmachine: (bridge-171116)     <apic/>
	I1004 02:13:58.078580  180785 main.go:141] libmachine: (bridge-171116)     <pae/>
	I1004 02:13:58.078604  180785 main.go:141] libmachine: (bridge-171116)     
	I1004 02:13:58.078615  180785 main.go:141] libmachine: (bridge-171116)   </features>
	I1004 02:13:58.078633  180785 main.go:141] libmachine: (bridge-171116)   <cpu mode='host-passthrough'>
	I1004 02:13:58.078654  180785 main.go:141] libmachine: (bridge-171116)   
	I1004 02:13:58.078669  180785 main.go:141] libmachine: (bridge-171116)   </cpu>
	I1004 02:13:58.078682  180785 main.go:141] libmachine: (bridge-171116)   <os>
	I1004 02:13:58.078697  180785 main.go:141] libmachine: (bridge-171116)     <type>hvm</type>
	I1004 02:13:58.078705  180785 main.go:141] libmachine: (bridge-171116)     <boot dev='cdrom'/>
	I1004 02:13:58.078717  180785 main.go:141] libmachine: (bridge-171116)     <boot dev='hd'/>
	I1004 02:13:58.078736  180785 main.go:141] libmachine: (bridge-171116)     <bootmenu enable='no'/>
	I1004 02:13:58.078752  180785 main.go:141] libmachine: (bridge-171116)   </os>
	I1004 02:13:58.078765  180785 main.go:141] libmachine: (bridge-171116)   <devices>
	I1004 02:13:58.078782  180785 main.go:141] libmachine: (bridge-171116)     <disk type='file' device='cdrom'>
	I1004 02:13:58.078801  180785 main.go:141] libmachine: (bridge-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/boot2docker.iso'/>
	I1004 02:13:58.078896  180785 main.go:141] libmachine: (bridge-171116)       <target dev='hdc' bus='scsi'/>
	I1004 02:13:58.078933  180785 main.go:141] libmachine: (bridge-171116)       <readonly/>
	I1004 02:13:58.078949  180785 main.go:141] libmachine: (bridge-171116)     </disk>
	I1004 02:13:58.078963  180785 main.go:141] libmachine: (bridge-171116)     <disk type='file' device='disk'>
	I1004 02:13:58.078978  180785 main.go:141] libmachine: (bridge-171116)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1004 02:13:58.079309  180785 main.go:141] libmachine: (bridge-171116)       <source file='/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/bridge-171116.rawdisk'/>
	I1004 02:13:58.079357  180785 main.go:141] libmachine: (bridge-171116)       <target dev='hda' bus='virtio'/>
	I1004 02:13:58.079374  180785 main.go:141] libmachine: (bridge-171116)     </disk>
	I1004 02:13:58.079394  180785 main.go:141] libmachine: (bridge-171116)     <interface type='network'>
	I1004 02:13:58.079407  180785 main.go:141] libmachine: (bridge-171116)       <source network='mk-bridge-171116'/>
	I1004 02:13:58.079418  180785 main.go:141] libmachine: (bridge-171116)       <model type='virtio'/>
	I1004 02:13:58.079435  180785 main.go:141] libmachine: (bridge-171116)     </interface>
	I1004 02:13:58.079445  180785 main.go:141] libmachine: (bridge-171116)     <interface type='network'>
	I1004 02:13:58.079461  180785 main.go:141] libmachine: (bridge-171116)       <source network='default'/>
	I1004 02:13:58.079476  180785 main.go:141] libmachine: (bridge-171116)       <model type='virtio'/>
	I1004 02:13:58.079487  180785 main.go:141] libmachine: (bridge-171116)     </interface>
	I1004 02:13:58.079502  180785 main.go:141] libmachine: (bridge-171116)     <serial type='pty'>
	I1004 02:13:58.079514  180785 main.go:141] libmachine: (bridge-171116)       <target port='0'/>
	I1004 02:13:58.079527  180785 main.go:141] libmachine: (bridge-171116)     </serial>
	I1004 02:13:58.079543  180785 main.go:141] libmachine: (bridge-171116)     <console type='pty'>
	I1004 02:13:58.079562  180785 main.go:141] libmachine: (bridge-171116)       <target type='serial' port='0'/>
	I1004 02:13:58.079578  180785 main.go:141] libmachine: (bridge-171116)     </console>
	I1004 02:13:58.079588  180785 main.go:141] libmachine: (bridge-171116)     <rng model='virtio'>
	I1004 02:13:58.079602  180785 main.go:141] libmachine: (bridge-171116)       <backend model='random'>/dev/random</backend>
	I1004 02:13:58.079617  180785 main.go:141] libmachine: (bridge-171116)     </rng>
	I1004 02:13:58.079628  180785 main.go:141] libmachine: (bridge-171116)     
	I1004 02:13:58.079638  180785 main.go:141] libmachine: (bridge-171116)     
	I1004 02:13:58.079654  180785 main.go:141] libmachine: (bridge-171116)   </devices>
	I1004 02:13:58.079663  180785 main.go:141] libmachine: (bridge-171116) </domain>
	I1004 02:13:58.079681  180785 main.go:141] libmachine: (bridge-171116) 
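Once the domain XML above has been rendered, the driver hands it to libvirt to define and boot the guest. A hedged sketch using the github.com/libvirt/libvirt-go bindings (requires cgo and the libvirt development headers; the exact calls the kvm2 driver makes may differ, and domainXML stands in for the <domain> document printed above):

package main

import (
	"fmt"
	"os"

	libvirt "github.com/libvirt/libvirt-go"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		fmt.Fprintln(os.Stderr, "connect:", err)
		os.Exit(1)
	}
	defer conn.Close()

	// domainXML would be the <domain type='kvm'>...</domain> document above.
	domainXML := os.Getenv("DOMAIN_XML")

	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		fmt.Fprintln(os.Stderr, "define:", err)
		os.Exit(1)
	}
	defer dom.Free()

	// Create() boots the defined-but-inactive domain.
	if err := dom.Create(); err != nil {
		fmt.Fprintln(os.Stderr, "start:", err)
		os.Exit(1)
	}
	fmt.Println("domain started")
}
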
	I1004 02:13:58.084354  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:de:45:f1 in network default
	I1004 02:13:58.084949  180785 main.go:141] libmachine: (bridge-171116) Ensuring networks are active...
	I1004 02:13:58.084978  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:13:58.085747  180785 main.go:141] libmachine: (bridge-171116) Ensuring network default is active
	I1004 02:13:58.086190  180785 main.go:141] libmachine: (bridge-171116) Ensuring network mk-bridge-171116 is active
	I1004 02:13:58.086730  180785 main.go:141] libmachine: (bridge-171116) Getting domain xml...
	I1004 02:13:58.087451  180785 main.go:141] libmachine: (bridge-171116) Creating domain...
	I1004 02:14:00.152498  180785 main.go:141] libmachine: (bridge-171116) Waiting to get IP...
	I1004 02:14:00.153544  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:00.154112  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:00.154141  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:00.154107  181886 retry.go:31] will retry after 212.12695ms: waiting for machine to come up
	I1004 02:14:00.367803  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:00.368587  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:00.368617  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:00.368528  181886 retry.go:31] will retry after 240.349267ms: waiting for machine to come up
	I1004 02:14:00.611094  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:00.611652  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:00.611690  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:00.611559  181886 retry.go:31] will retry after 484.224219ms: waiting for machine to come up
	I1004 02:13:59.246009  179277 main.go:141] libmachine: (flannel-171116) Calling .GetIP
	I1004 02:13:59.249201  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:59.406606  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:13:59.406646  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:13:59.406975  179277 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1004 02:13:59.411735  179277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:13:59.423794  179277 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:13:59.423845  179277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:13:59.458867  179277 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 02:13:59.458930  179277 ssh_runner.go:195] Run: which lz4
	I1004 02:13:59.462714  179277 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1004 02:13:59.466721  179277 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:13:59.466746  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 02:14:01.514559  179277 crio.go:444] Took 2.051869 seconds to copy over tarball
	I1004 02:14:01.514635  179277 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:14:01.097462  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:01.098012  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:01.098044  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:01.097955  181886 retry.go:31] will retry after 401.004971ms: waiting for machine to come up
	I1004 02:14:01.500555  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:01.501158  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:01.501188  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:01.501098  181886 retry.go:31] will retry after 547.065058ms: waiting for machine to come up
	I1004 02:14:02.049923  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:02.051007  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:02.051036  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:02.050931  181886 retry.go:31] will retry after 825.852739ms: waiting for machine to come up
	I1004 02:14:02.879188  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:02.879755  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:02.879788  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:02.879682  181886 retry.go:31] will retry after 884.2385ms: waiting for machine to come up
	I1004 02:14:03.765604  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:03.766258  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:03.766290  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:03.766201  181886 retry.go:31] will retry after 1.485924487s: waiting for machine to come up
	I1004 02:14:05.253251  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:05.253819  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:05.253859  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:05.253759  181886 retry.go:31] will retry after 1.671033563s: waiting for machine to come up
	I1004 02:14:04.832646  179277 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.317975266s)
	I1004 02:14:04.832683  179277 crio.go:451] Took 3.318099 seconds to extract the tarball
	I1004 02:14:04.832702  179277 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:14:04.895099  179277 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:14:04.975324  179277 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 02:14:04.975348  179277 cache_images.go:84] Images are preloaded, skipping loading
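The preload flow above is: list the runtime's images, and only if the expected control-plane images (such as registry.k8s.io/kube-apiserver:v1.28.2) are missing, copy the preload tarball over, unpack it into /var, and re-check. A sketch of the detection half (it assumes crictl is on PATH and a repoTags field in its JSON output, which this log shows only indirectly):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// crictlImages mirrors the relevant part of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already knows the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.2")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("all images are preloaded, skipping loading")
	} else {
		fmt.Println("images not preloaded; extract the preload tarball with: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4")
	}
}
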
	I1004 02:14:04.975407  179277 ssh_runner.go:195] Run: crio config
	I1004 02:14:05.031430  179277 cni.go:84] Creating CNI manager for "flannel"
	I1004 02:14:05.031474  179277 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 02:14:05.031503  179277 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.26 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-171116 NodeName:flannel-171116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.26"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.26 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:14:05.031676  179277 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.26
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-171116"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.26
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.26"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:14:05.031799  179277 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=flannel-171116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.26
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:flannel-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:}
	I1004 02:14:05.031870  179277 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 02:14:05.042111  179277 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:14:05.042252  179277 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:14:05.052019  179277 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1004 02:14:05.069543  179277 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:14:05.087164  179277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1004 02:14:05.104735  179277 ssh_runner.go:195] Run: grep 192.168.50.26	control-plane.minikube.internal$ /etc/hosts
	I1004 02:14:05.108917  179277 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.26	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:14:05.121597  179277 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116 for IP: 192.168.50.26
	I1004 02:14:05.121652  179277 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.121810  179277 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 02:14:05.121870  179277 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 02:14:05.121928  179277 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.key
	I1004 02:14:05.121946  179277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.crt with IP's: []
	I1004 02:14:05.343027  179277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.crt ...
	I1004 02:14:05.343058  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.crt: {Name:mka004d8c7fef5863520325f74ccf2238daeb87b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.343246  179277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.key ...
	I1004 02:14:05.343266  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/client.key: {Name:mk021e2ee153399f41a0bbcca2035093847cbeb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.343373  179277 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key.49030432
	I1004 02:14:05.343396  179277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt.49030432 with IP's: [192.168.50.26 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 02:14:05.421099  179277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt.49030432 ...
	I1004 02:14:05.421132  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt.49030432: {Name:mkac4a19924d7fed75d638ac9b1e007b36f034ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.421318  179277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key.49030432 ...
	I1004 02:14:05.421338  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key.49030432: {Name:mk987d8d198b4b9a330759f6f3a3ff76ea5aee8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.421436  179277 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt.49030432 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt
	I1004 02:14:05.421523  179277 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key.49030432 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key
	I1004 02:14:05.421594  179277 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.key
	I1004 02:14:05.421622  179277 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.crt with IP's: []
	I1004 02:14:05.621358  179277 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.crt ...
	I1004 02:14:05.621386  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.crt: {Name:mk165e274e19636b3d2e5948e8fcaa453a531d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.621577  179277 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.key ...
	I1004 02:14:05.621596  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.key: {Name:mkcf01625a98d96a64428b7543c2737452b76cdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:05.621809  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 02:14:05.621872  179277 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 02:14:05.621891  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 02:14:05.621924  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:14:05.621962  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:14:05.621996  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 02:14:05.622049  179277 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:14:05.622600  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 02:14:05.650172  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:14:05.675799  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:14:05.701424  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/flannel-171116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 02:14:05.727391  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:14:05.751677  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:14:05.778518  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:14:05.804491  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:14:05.829267  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 02:14:05.854005  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:14:05.880063  179277 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 02:14:05.904013  179277 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:14:05.922287  179277 ssh_runner.go:195] Run: openssl version
	I1004 02:14:05.928056  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 02:14:05.942397  179277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 02:14:05.948544  179277 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 02:14:05.948601  179277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 02:14:05.954457  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
	I1004 02:14:05.965997  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:14:05.978274  179277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:05.983568  179277 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:05.983650  179277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:05.991053  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:14:06.002572  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 02:14:06.013628  179277 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 02:14:06.019004  179277 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 02:14:06.019078  179277 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 02:14:06.025202  179277 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 02:14:06.036753  179277 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 02:14:06.041528  179277 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 02:14:06.041585  179277 kubeadm.go:404] StartCluster: {Name:flannel-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:flannel-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.26 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:14:06.041684  179277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:14:06.041728  179277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:14:06.084562  179277 cri.go:89] found id: ""
	I1004 02:14:06.084666  179277 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:14:06.094973  179277 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:14:06.105096  179277 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:14:06.114956  179277 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:14:06.115005  179277 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:14:06.168484  179277 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:14:06.168582  179277 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:14:06.320024  179277 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:14:06.320149  179277 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:14:06.320249  179277 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:14:06.581021  179277 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:14:06.642659  179277 out.go:204]   - Generating certificates and keys ...
	I1004 02:14:06.642776  179277 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:14:06.642867  179277 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:14:06.978370  179277 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:14:07.213683  179277 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:14:07.584640  179277 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:14:08.439108  179277 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 02:14:08.875028  179277 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 02:14:08.877235  179277 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [flannel-171116 localhost] and IPs [192.168.50.26 127.0.0.1 ::1]
	I1004 02:14:09.126788  179277 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 02:14:09.127033  179277 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [flannel-171116 localhost] and IPs [192.168.50.26 127.0.0.1 ::1]
	I1004 02:14:09.392376  179277 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:14:09.466882  179277 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:14:09.687814  179277 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 02:14:09.687975  179277 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:14:09.796374  179277 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:14:10.041320  179277 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:14:10.429292  179277 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:14:10.548660  179277 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:14:10.549643  179277 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:14:10.552417  179277 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:14:06.926531  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:06.927004  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:06.927043  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:06.926961  181886 retry.go:31] will retry after 1.990519246s: waiting for machine to come up
	I1004 02:14:08.919124  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:08.919601  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:08.919641  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:08.919536  181886 retry.go:31] will retry after 2.2283117s: waiting for machine to come up
	I1004 02:14:10.554703  179277 out.go:204]   - Booting up control plane ...
	I1004 02:14:10.554824  179277 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:14:10.554950  179277 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:14:10.555066  179277 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:14:10.573368  179277 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:14:10.575649  179277 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:14:10.575704  179277 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:14:10.699438  179277 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:14:11.149871  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:11.150463  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:11.150496  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:11.150405  181886 retry.go:31] will retry after 3.048823773s: waiting for machine to come up
	I1004 02:14:14.201853  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:14.202309  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:14.202335  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:14.202268  181886 retry.go:31] will retry after 4.36161728s: waiting for machine to come up
	I1004 02:14:18.699821  179277 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003138 seconds
	I1004 02:14:18.699988  179277 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:14:18.722722  179277 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:14:19.253635  179277 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:14:19.253956  179277 kubeadm.go:322] [mark-control-plane] Marking the node flannel-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:14:19.769356  179277 kubeadm.go:322] [bootstrap-token] Using token: 0ywokv.saqps8i9mssfazsl
	I1004 02:14:19.771192  179277 out.go:204]   - Configuring RBAC rules ...
	I1004 02:14:19.771320  179277 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:14:19.779163  179277 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:14:19.787789  179277 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:14:19.791543  179277 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:14:19.795661  179277 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:14:19.803158  179277 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:14:19.820945  179277 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:14:20.084330  179277 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:14:20.191217  179277 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:14:20.191243  179277 kubeadm.go:322] 
	I1004 02:14:20.191302  179277 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:14:20.191311  179277 kubeadm.go:322] 
	I1004 02:14:20.191385  179277 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:14:20.191432  179277 kubeadm.go:322] 
	I1004 02:14:20.191474  179277 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:14:20.191560  179277 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:14:20.191659  179277 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:14:20.191679  179277 kubeadm.go:322] 
	I1004 02:14:20.191742  179277 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:14:20.191751  179277 kubeadm.go:322] 
	I1004 02:14:20.191842  179277 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:14:20.191863  179277 kubeadm.go:322] 
	I1004 02:14:20.191941  179277 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:14:20.192036  179277 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:14:20.192138  179277 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:14:20.192149  179277 kubeadm.go:322] 
	I1004 02:14:20.192258  179277 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:14:20.192412  179277 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:14:20.192434  179277 kubeadm.go:322] 
	I1004 02:14:20.192564  179277 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0ywokv.saqps8i9mssfazsl \
	I1004 02:14:20.192694  179277 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:14:20.192740  179277 kubeadm.go:322] 	--control-plane 
	I1004 02:14:20.192755  179277 kubeadm.go:322] 
	I1004 02:14:20.192878  179277 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:14:20.192899  179277 kubeadm.go:322] 
	I1004 02:14:20.193009  179277 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0ywokv.saqps8i9mssfazsl \
	I1004 02:14:20.193135  179277 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:14:20.193298  179277 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:14:20.193324  179277 cni.go:84] Creating CNI manager for "flannel"
	I1004 02:14:20.195208  179277 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I1004 02:14:18.565339  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:18.565820  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find current IP address of domain bridge-171116 in network mk-bridge-171116
	I1004 02:14:18.565855  180785 main.go:141] libmachine: (bridge-171116) DBG | I1004 02:14:18.565790  181886 retry.go:31] will retry after 4.963441833s: waiting for machine to come up
	I1004 02:14:20.196833  179277 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1004 02:14:20.208618  179277 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1004 02:14:20.208644  179277 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4398 bytes)
	I1004 02:14:20.249259  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1004 02:14:21.442583  179277 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.193287323s)
	I1004 02:14:21.442637  179277 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:14:21.442748  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=flannel-171116 minikube.k8s.io/updated_at=2023_10_04T02_14_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:21.442757  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:21.582468  179277 ops.go:34] apiserver oom_adj: -16
	I1004 02:14:21.582522  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:23.530325  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.530839  180785 main.go:141] libmachine: (bridge-171116) Found IP for machine: 192.168.72.134
	I1004 02:14:23.530857  180785 main.go:141] libmachine: (bridge-171116) Reserving static IP address...
	I1004 02:14:23.530872  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has current primary IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.531267  180785 main.go:141] libmachine: (bridge-171116) DBG | unable to find host DHCP lease matching {name: "bridge-171116", mac: "52:54:00:c6:2e:ad", ip: "192.168.72.134"} in network mk-bridge-171116
	I1004 02:14:23.608915  180785 main.go:141] libmachine: (bridge-171116) DBG | Getting to WaitForSSH function...
	I1004 02:14:23.608960  180785 main.go:141] libmachine: (bridge-171116) Reserved static IP address: 192.168.72.134
	I1004 02:14:23.608975  180785 main.go:141] libmachine: (bridge-171116) Waiting for SSH to be available...
	I1004 02:14:23.612353  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.612774  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:23.612813  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.612975  180785 main.go:141] libmachine: (bridge-171116) DBG | Using SSH client type: external
	I1004 02:14:23.613007  180785 main.go:141] libmachine: (bridge-171116) DBG | Using SSH private key: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa (-rw-------)
	I1004 02:14:23.613048  180785 main.go:141] libmachine: (bridge-171116) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.134 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1004 02:14:23.613069  180785 main.go:141] libmachine: (bridge-171116) DBG | About to run SSH command:
	I1004 02:14:23.613124  180785 main.go:141] libmachine: (bridge-171116) DBG | exit 0
	I1004 02:14:23.709801  180785 main.go:141] libmachine: (bridge-171116) DBG | SSH cmd err, output: <nil>: 
	I1004 02:14:23.710160  180785 main.go:141] libmachine: (bridge-171116) KVM machine creation complete!
	I1004 02:14:23.710498  180785 main.go:141] libmachine: (bridge-171116) Calling .GetConfigRaw
	I1004 02:14:23.711076  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:23.711300  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:23.711479  180785 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1004 02:14:23.711500  180785 main.go:141] libmachine: (bridge-171116) Calling .GetState
	I1004 02:14:23.712759  180785 main.go:141] libmachine: Detecting operating system of created instance...
	I1004 02:14:23.712779  180785 main.go:141] libmachine: Waiting for SSH to be available...
	I1004 02:14:23.712789  180785 main.go:141] libmachine: Getting to WaitForSSH function...
	I1004 02:14:23.712799  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:23.715436  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.715851  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:23.715892  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.716028  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:23.716231  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.716438  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.716628  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:23.716800  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:23.717394  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:23.717416  180785 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1004 02:14:23.845334  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:14:23.845368  180785 main.go:141] libmachine: Detecting the provisioner...
	I1004 02:14:23.845382  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:23.849031  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.849453  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:23.849484  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.849706  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:23.849947  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.850106  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.850269  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:23.850492  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:23.850831  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:23.850847  180785 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1004 02:14:23.978923  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-gb090841-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1004 02:14:23.979042  180785 main.go:141] libmachine: found compatible host: buildroot
	I1004 02:14:23.979059  180785 main.go:141] libmachine: Provisioning with buildroot...
	I1004 02:14:23.979072  180785 main.go:141] libmachine: (bridge-171116) Calling .GetMachineName
	I1004 02:14:23.979354  180785 buildroot.go:166] provisioning hostname "bridge-171116"
	I1004 02:14:23.979389  180785 main.go:141] libmachine: (bridge-171116) Calling .GetMachineName
	I1004 02:14:23.979581  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:23.982209  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.982564  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:23.982584  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:23.982753  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:23.982946  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.983119  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:23.983300  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:23.983479  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:23.983810  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:23.983825  180785 main.go:141] libmachine: About to run SSH command:
	sudo hostname bridge-171116 && echo "bridge-171116" | sudo tee /etc/hostname
	I1004 02:14:24.123273  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: bridge-171116
	
	I1004 02:14:24.123324  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:24.126209  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.126593  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.126629  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.126755  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:24.126924  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.127070  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.127168  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:24.127301  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:24.127735  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:24.127763  180785 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sbridge-171116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 bridge-171116/g' /etc/hosts;
				else 
					echo '127.0.1.1 bridge-171116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1004 02:14:24.262670  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1004 02:14:24.262701  180785 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17348-128338/.minikube CaCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17348-128338/.minikube}
	I1004 02:14:24.262730  180785 buildroot.go:174] setting up certificates
	I1004 02:14:24.262745  180785 provision.go:83] configureAuth start
	I1004 02:14:24.262757  180785 main.go:141] libmachine: (bridge-171116) Calling .GetMachineName
	I1004 02:14:24.263039  180785 main.go:141] libmachine: (bridge-171116) Calling .GetIP
	I1004 02:14:24.266134  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.266553  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.266589  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.266777  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:24.269254  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.269772  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.269809  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.269947  180785 provision.go:138] copyHostCerts
	I1004 02:14:24.270011  180785 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem, removing ...
	I1004 02:14:24.270025  180785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem
	I1004 02:14:24.270091  180785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/cert.pem (1123 bytes)
	I1004 02:14:24.270211  180785 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem, removing ...
	I1004 02:14:24.270220  180785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem
	I1004 02:14:24.270255  180785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/key.pem (1675 bytes)
	I1004 02:14:24.270327  180785 exec_runner.go:144] found /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem, removing ...
	I1004 02:14:24.270337  180785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem
	I1004 02:14:24.270363  180785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17348-128338/.minikube/ca.pem (1078 bytes)
	I1004 02:14:24.270437  180785 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem org=jenkins.bridge-171116 san=[192.168.72.134 192.168.72.134 localhost 127.0.0.1 minikube bridge-171116]
	I1004 02:14:24.376536  180785 provision.go:172] copyRemoteCerts
	I1004 02:14:24.376599  180785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1004 02:14:24.376638  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:24.379568  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.379907  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.379955  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.380134  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:24.380322  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.380494  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:24.380653  180785 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa Username:docker}
	I1004 02:14:24.472644  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1004 02:14:24.496593  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1004 02:14:24.520496  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1004 02:14:24.543752  180785 provision.go:86] duration metric: configureAuth took 280.991962ms
	I1004 02:14:24.543802  180785 buildroot.go:189] setting minikube options for container-runtime
	I1004 02:14:24.543983  180785 config.go:182] Loaded profile config "bridge-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:14:24.544117  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:24.547342  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.547715  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.547752  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.547977  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:24.548226  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.548463  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.548696  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:24.548867  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:24.549202  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:24.549226  180785 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1004 02:14:24.905021  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1004 02:14:24.905100  180785 main.go:141] libmachine: Checking connection to Docker...
	I1004 02:14:24.905116  180785 main.go:141] libmachine: (bridge-171116) Calling .GetURL
	I1004 02:14:24.906602  180785 main.go:141] libmachine: (bridge-171116) DBG | Using libvirt version 6000000
	I1004 02:14:24.908621  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.908970  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.908998  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.909174  180785 main.go:141] libmachine: Docker is up and running!
	I1004 02:14:24.909189  180785 main.go:141] libmachine: Reticulating splines...
	I1004 02:14:24.909196  180785 client.go:171] LocalClient.Create took 27.297250399s
	I1004 02:14:24.909220  180785 start.go:167] duration metric: libmachine.API.Create for "bridge-171116" took 27.297327297s
	I1004 02:14:24.909232  180785 start.go:300] post-start starting for "bridge-171116" (driver="kvm2")
	I1004 02:14:24.909244  180785 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1004 02:14:24.909269  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:24.909526  180785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1004 02:14:24.909566  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:24.911508  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.911818  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:24.911851  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:24.911998  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:24.912205  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:24.912376  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:24.912488  180785 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa Username:docker}
	I1004 02:14:25.004932  180785 ssh_runner.go:195] Run: cat /etc/os-release
	I1004 02:14:25.010031  180785 info.go:137] Remote host: Buildroot 2021.02.12
	I1004 02:14:25.010060  180785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/addons for local assets ...
	I1004 02:14:25.010143  180785 filesync.go:126] Scanning /home/jenkins/minikube-integration/17348-128338/.minikube/files for local assets ...
	I1004 02:14:25.010261  180785 filesync.go:149] local asset: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem -> 1355652.pem in /etc/ssl/certs
	I1004 02:14:25.010382  180785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1004 02:14:25.020514  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:14:25.044270  180785 start.go:303] post-start completed in 135.021496ms
	I1004 02:14:25.044325  180785 main.go:141] libmachine: (bridge-171116) Calling .GetConfigRaw
	I1004 02:14:25.044950  180785 main.go:141] libmachine: (bridge-171116) Calling .GetIP
	I1004 02:14:25.047815  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.048233  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:25.048266  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.048538  180785 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/config.json ...
	I1004 02:14:25.048817  180785 start.go:128] duration metric: createHost completed in 27.460432927s
	I1004 02:14:25.048852  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:25.051977  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.052421  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:25.052458  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.052689  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:25.052883  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:25.053090  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:25.053204  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:25.053372  180785 main.go:141] libmachine: Using SSH client type: native
	I1004 02:14:25.053751  180785 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f6d80] 0x7f9a60 <nil>  [] 0s} 192.168.72.134 22 <nil> <nil>}
	I1004 02:14:25.053766  180785 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1004 02:14:25.178927  180785 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696385665.161692819
	
	I1004 02:14:25.178953  180785 fix.go:206] guest clock: 1696385665.161692819
	I1004 02:14:25.178961  180785 fix.go:219] Guest: 2023-10-04 02:14:25.161692819 +0000 UTC Remote: 2023-10-04 02:14:25.048835004 +0000 UTC m=+39.264010838 (delta=112.857815ms)
	I1004 02:14:25.178978  180785 fix.go:190] guest clock delta is within tolerance: 112.857815ms
	I1004 02:14:25.178987  180785 start.go:83] releasing machines lock for "bridge-171116", held for 27.590789328s
	I1004 02:14:25.179013  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:25.179330  180785 main.go:141] libmachine: (bridge-171116) Calling .GetIP
	I1004 02:14:25.181930  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.182321  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:25.182355  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.182507  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:25.183054  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:25.183270  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:25.183370  180785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1004 02:14:25.183438  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:25.183462  180785 ssh_runner.go:195] Run: cat /version.json
	I1004 02:14:25.183484  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:25.186004  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.186356  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.186425  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:25.186470  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.186633  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:25.186716  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:25.186745  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:25.186856  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:25.186905  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHPort
	I1004 02:14:25.187035  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:25.187085  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHKeyPath
	I1004 02:14:25.187204  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHUsername
	I1004 02:14:25.187226  180785 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa Username:docker}
	I1004 02:14:25.187326  180785 sshutil.go:53] new ssh client: &{IP:192.168.72.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/bridge-171116/id_rsa Username:docker}
	I1004 02:14:25.302568  180785 ssh_runner.go:195] Run: systemctl --version
	I1004 02:14:25.308983  180785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1004 02:14:25.472618  180785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1004 02:14:25.479791  180785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1004 02:14:25.479867  180785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1004 02:14:25.495379  180785 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1004 02:14:25.495405  180785 start.go:469] detecting cgroup driver to use...
	I1004 02:14:25.495484  180785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1004 02:14:25.509990  180785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1004 02:14:25.522493  180785 docker.go:197] disabling cri-docker service (if available) ...
	I1004 02:14:25.522558  180785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1004 02:14:25.535349  180785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1004 02:14:25.548232  180785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1004 02:14:25.652769  180785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1004 02:14:25.788359  180785 docker.go:213] disabling docker service ...
	I1004 02:14:25.788438  180785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1004 02:14:25.803964  180785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1004 02:14:25.817140  180785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1004 02:14:25.929714  180785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1004 02:14:26.050119  180785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1004 02:14:26.064870  180785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1004 02:14:26.084091  180785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1004 02:14:26.084167  180785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:14:26.094754  180785 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1004 02:14:26.094832  180785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:14:26.105278  180785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:14:26.116307  180785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1004 02:14:26.127111  180785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1004 02:14:26.138296  180785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1004 02:14:26.147904  180785 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1004 02:14:26.147980  180785 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1004 02:14:26.161092  180785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1004 02:14:26.170870  180785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1004 02:14:26.287161  180785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1004 02:14:26.468103  180785 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I1004 02:14:26.468195  180785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1004 02:14:26.473077  180785 start.go:537] Will wait 60s for crictl version
	I1004 02:14:26.473159  180785 ssh_runner.go:195] Run: which crictl
	I1004 02:14:26.477113  180785 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1004 02:14:26.515813  180785 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1004 02:14:26.515916  180785 ssh_runner.go:195] Run: crio --version
	I1004 02:14:26.562966  180785 ssh_runner.go:195] Run: crio --version
	I1004 02:14:26.620787  180785 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.1 ...
	I1004 02:14:21.673801  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:22.262069  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:22.762086  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:23.261488  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:23.762460  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:24.262059  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:24.762212  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:25.261959  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:25.761504  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:26.261791  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:26.622208  180785 main.go:141] libmachine: (bridge-171116) Calling .GetIP
	I1004 02:14:26.625631  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:26.626030  180785 main.go:141] libmachine: (bridge-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:2e:ad", ip: ""} in network mk-bridge-171116: {Iface:virbr1 ExpiryTime:2023-10-04 03:14:15 +0000 UTC Type:0 Mac:52:54:00:c6:2e:ad Iaid: IPaddr:192.168.72.134 Prefix:24 Hostname:bridge-171116 Clientid:01:52:54:00:c6:2e:ad}
	I1004 02:14:26.626056  180785 main.go:141] libmachine: (bridge-171116) DBG | domain bridge-171116 has defined IP address 192.168.72.134 and MAC address 52:54:00:c6:2e:ad in network mk-bridge-171116
	I1004 02:14:26.626287  180785 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1004 02:14:26.630749  180785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1004 02:14:26.643211  180785 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 02:14:26.643269  180785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:14:26.678585  180785 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.2". assuming images are not preloaded.
	I1004 02:14:26.678665  180785 ssh_runner.go:195] Run: which lz4
	I1004 02:14:26.682470  180785 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1004 02:14:26.686303  180785 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1004 02:14:26.686331  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457151603 bytes)
	I1004 02:14:28.618527  180785 crio.go:444] Took 1.936093 seconds to copy over tarball
	I1004 02:14:28.618606  180785 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1004 02:14:26.762273  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:27.261570  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:27.761787  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:28.261718  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:28.762430  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:29.261619  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:29.762525  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:30.261693  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:30.762483  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:31.262066  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:31.762104  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:32.262199  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:32.892212  179277 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:33.343520  179277 kubeadm.go:1081] duration metric: took 11.9008275s to wait for elevateKubeSystemPrivileges.
	I1004 02:14:33.343563  179277 kubeadm.go:406] StartCluster complete in 27.301982639s
	I1004 02:14:33.343590  179277 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:33.343680  179277 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:14:33.345021  179277 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:33.375317  179277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:14:33.375635  179277 config.go:182] Loaded profile config "flannel-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:14:33.375809  179277 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:14:33.375891  179277 addons.go:69] Setting storage-provisioner=true in profile "flannel-171116"
	I1004 02:14:33.375914  179277 addons.go:231] Setting addon storage-provisioner=true in "flannel-171116"
	I1004 02:14:33.375976  179277 host.go:66] Checking if "flannel-171116" exists ...
	I1004 02:14:33.376481  179277 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:33.376520  179277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:33.376587  179277 addons.go:69] Setting default-storageclass=true in profile "flannel-171116"
	I1004 02:14:33.376613  179277 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-171116"
	I1004 02:14:33.376995  179277 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:33.377018  179277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:33.394472  179277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I1004 02:14:33.394521  179277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39983
	I1004 02:14:33.395000  179277 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:33.395228  179277 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:33.395653  179277 main.go:141] libmachine: Using API Version  1
	I1004 02:14:33.395672  179277 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:33.395755  179277 main.go:141] libmachine: Using API Version  1
	I1004 02:14:33.395780  179277 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:33.396188  179277 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:33.396230  179277 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:33.396387  179277 main.go:141] libmachine: (flannel-171116) Calling .GetState
	I1004 02:14:33.396872  179277 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:33.396912  179277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:33.399010  179277 addons.go:231] Setting addon default-storageclass=true in "flannel-171116"
	I1004 02:14:33.399042  179277 host.go:66] Checking if "flannel-171116" exists ...
	I1004 02:14:33.399395  179277 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:33.399436  179277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:33.413832  179277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37931
	I1004 02:14:33.414429  179277 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:33.414998  179277 main.go:141] libmachine: Using API Version  1
	I1004 02:14:33.415026  179277 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:33.415415  179277 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:33.415982  179277 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:33.416020  179277 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:33.416221  179277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35759
	I1004 02:14:33.416643  179277 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:33.417161  179277 main.go:141] libmachine: Using API Version  1
	I1004 02:14:33.417185  179277 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:33.417500  179277 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:33.417775  179277 main.go:141] libmachine: (flannel-171116) Calling .GetState
	I1004 02:14:33.419623  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:14:33.549186  179277 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:14:33.434881  179277 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I1004 02:14:33.629660  179277 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:14:33.629680  179277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:14:33.629712  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:14:33.630331  179277 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:33.631063  179277 main.go:141] libmachine: Using API Version  1
	I1004 02:14:33.631091  179277 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:33.631733  179277 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:33.631988  179277 main.go:141] libmachine: (flannel-171116) Calling .GetState
	I1004 02:14:33.633734  179277 main.go:141] libmachine: (flannel-171116) Calling .DriverName
	I1004 02:14:33.633893  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:14:33.634043  179277 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1004 02:14:33.634058  179277 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1004 02:14:33.634074  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHHostname
	I1004 02:14:33.634396  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:14:33.634426  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:14:33.634645  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:14:33.634864  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:14:33.635070  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:14:33.635258  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:14:33.637287  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:14:33.637889  179277 main.go:141] libmachine: (flannel-171116) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:b1:18", ip: ""} in network mk-flannel-171116: {Iface:virbr4 ExpiryTime:2023-10-04 03:13:49 +0000 UTC Type:0 Mac:52:54:00:a2:b1:18 Iaid: IPaddr:192.168.50.26 Prefix:24 Hostname:flannel-171116 Clientid:01:52:54:00:a2:b1:18}
	I1004 02:14:33.637918  179277 main.go:141] libmachine: (flannel-171116) DBG | domain flannel-171116 has defined IP address 192.168.50.26 and MAC address 52:54:00:a2:b1:18 in network mk-flannel-171116
	I1004 02:14:33.638123  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHPort
	I1004 02:14:33.638305  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHKeyPath
	I1004 02:14:33.638475  179277 main.go:141] libmachine: (flannel-171116) Calling .GetSSHUsername
	I1004 02:14:33.638686  179277 sshutil.go:53] new ssh client: &{IP:192.168.50.26 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/flannel-171116/id_rsa Username:docker}
	I1004 02:14:33.746898  179277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1004 02:14:33.748203  179277 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:14:34.150260  179277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1004 02:14:34.724698  179277 main.go:141] libmachine: Making call to close driver server
	I1004 02:14:34.724729  179277 main.go:141] libmachine: (flannel-171116) Calling .Close
	I1004 02:14:34.725097  179277 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:14:34.725130  179277 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:14:34.725133  179277 main.go:141] libmachine: (flannel-171116) DBG | Closing plugin on server side
	I1004 02:14:34.725148  179277 main.go:141] libmachine: Making call to close driver server
	I1004 02:14:34.725160  179277 main.go:141] libmachine: (flannel-171116) Calling .Close
	I1004 02:14:34.725382  179277 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:14:34.725397  179277 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:14:34.748524  179277 kapi.go:248] "coredns" deployment in "kube-system" namespace and "flannel-171116" context rescaled to 1 replicas
	I1004 02:14:34.748570  179277 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.50.26 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:14:34.750296  179277 out.go:177] * Verifying Kubernetes components...
	I1004 02:14:34.752188  179277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 02:14:34.798958  179277 main.go:141] libmachine: Making call to close driver server
	I1004 02:14:34.798990  179277 main.go:141] libmachine: (flannel-171116) Calling .Close
	I1004 02:14:34.799329  179277 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:14:34.799355  179277 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:14:35.018048  179277 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1004 02:14:35.018055  179277 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.269809398s)
	I1004 02:14:35.018121  179277 main.go:141] libmachine: Making call to close driver server
	I1004 02:14:35.018139  179277 main.go:141] libmachine: (flannel-171116) Calling .Close
	I1004 02:14:35.018497  179277 main.go:141] libmachine: (flannel-171116) DBG | Closing plugin on server side
	I1004 02:14:35.018537  179277 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:14:35.018548  179277 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:14:35.018561  179277 main.go:141] libmachine: Making call to close driver server
	I1004 02:14:35.018574  179277 main.go:141] libmachine: (flannel-171116) Calling .Close
	I1004 02:14:35.018863  179277 main.go:141] libmachine: Successfully made call to close driver server
	I1004 02:14:35.018884  179277 main.go:141] libmachine: Making call to close connection to plugin binary
	I1004 02:14:35.018934  179277 main.go:141] libmachine: (flannel-171116) DBG | Closing plugin on server side
	I1004 02:14:35.020881  179277 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1004 02:14:31.705655  180785 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.087013561s)
	I1004 02:14:31.705701  180785 crio.go:451] Took 3.087149 seconds to extract the tarball
	I1004 02:14:31.705715  180785 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1004 02:14:31.751739  180785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1004 02:14:31.833053  180785 crio.go:496] all images are preloaded for cri-o runtime.
	I1004 02:14:31.833072  180785 cache_images.go:84] Images are preloaded, skipping loading
	I1004 02:14:31.833129  180785 ssh_runner.go:195] Run: crio config
	I1004 02:14:31.901684  180785 cni.go:84] Creating CNI manager for "bridge"
	I1004 02:14:31.901718  180785 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1004 02:14:31.901742  180785 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.134 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:bridge-171116 NodeName:bridge-171116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.134"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1004 02:14:31.901937  180785 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "bridge-171116"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.134"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1004 02:14:31.902024  180785 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=bridge-171116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:bridge-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I1004 02:14:31.902088  180785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1004 02:14:31.912662  180785 binaries.go:44] Found k8s binaries, skipping transfer
	I1004 02:14:31.912759  180785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1004 02:14:31.923605  180785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1004 02:14:31.941947  180785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1004 02:14:31.960041  180785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
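Editor's note: the 2100-byte file copied above is the multi-document kubeadm config shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by "---"). The sketch below, which is not minikube's implementation, splits such a file and prints each document's apiVersion and kind; the local file name kubeadm.yaml is an assumption:

    // splitconfig.go: split a multi-document kubeadm config and list apiVersion/kind.
    // Illustrative sketch; the input file name "kubeadm.yaml" is an assumption.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm configs separate documents with a "---" line.
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var m map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
    			panic(err)
    		}
    		fmt.Printf("%v / %v\n", m["apiVersion"], m["kind"])
    	}
    }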
	I1004 02:14:31.976889  180785 ssh_runner.go:195] Run: grep 192.168.72.134	control-plane.minikube.internal$ /etc/hosts
	I1004 02:14:31.980727  180785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.134	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
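Editor's note: the /etc/hosts rewrite above is deliberately idempotent - any existing line ending in a tab plus control-plane.minikube.internal is stripped before the fresh record is appended, so the step can be re-run safely. A rough Go equivalent of that shell pipeline (illustrative only; it hard-codes the IP from this run and needs root to write /etc/hosts):

    // hostsrewrite.go: strip any existing record for a host name from /etc/hosts and
    // append a fresh one, mirroring the shell pipeline in the log above.
    // Illustrative sketch; the IP is taken from this run and root is required.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const name = "control-plane.minikube.internal"
    	const record = "192.168.72.134\t" + name

    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Equivalent of: grep -v $'\tcontrol-plane.minikube.internal$'
    		if strings.HasSuffix(line, "\t"+name) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, record)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("updated /etc/hosts with", record)
    }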
	I1004 02:14:31.993278  180785 certs.go:56] Setting up /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116 for IP: 192.168.72.134
	I1004 02:14:31.993320  180785 certs.go:190] acquiring lock for shared ca certs: {Name:mkf5f5022c56aa1972ba79418b6a256bc9cb0aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:31.993501  180785 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key
	I1004 02:14:31.993562  180785 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key
	I1004 02:14:31.993610  180785 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.key
	I1004 02:14:31.993622  180785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.crt with IP's: []
	I1004 02:14:32.429997  180785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.crt ...
	I1004 02:14:32.430027  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.crt: {Name:mk7b1cf9bd5117f0d1f6c3d76fc70d7e7dfd3c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:32.430244  180785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.key ...
	I1004 02:14:32.430264  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/client.key: {Name:mka22fcbc12cf4de39736c0f238aaa412b8d4a72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:32.430385  180785 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key.fe588323
	I1004 02:14:32.430403  180785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt.fe588323 with IP's: [192.168.72.134 10.96.0.1 127.0.0.1 10.0.0.1]
	I1004 02:14:32.722662  180785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt.fe588323 ...
	I1004 02:14:32.722692  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt.fe588323: {Name:mk314006a79819f2d4f69721bf6a7305f854403e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:32.722861  180785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key.fe588323 ...
	I1004 02:14:32.722872  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key.fe588323: {Name:mkd6442f30b5373cc4ee62c16f35be886b66cf87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:32.722943  180785 certs.go:337] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt.fe588323 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt
	I1004 02:14:32.723007  180785 certs.go:341] copying /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key.fe588323 -> /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key
	I1004 02:14:32.723055  180785 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.key
	I1004 02:14:32.723067  180785 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.crt with IP's: []
	I1004 02:14:32.896460  180785 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.crt ...
	I1004 02:14:32.896491  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.crt: {Name:mk09e69396ae8e291c3aff85bfbd27849265c6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:32.896660  180785 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.key ...
	I1004 02:14:32.896675  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.key: {Name:mk9de5601405f3464b82ac11f7aa0e9846f34218 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
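Editor's note: the three "generating ... signed cert" steps above issue a client certificate, an apiserver serving certificate with extra SANs, and an aggregator proxy-client certificate, all signed by the CAs already cached under .minikube. As a sketch of what the client-certificate step amounts to (not minikube's code; the subject, validity window, and PKCS#1 key encoding are assumptions):

    // clientcert.go: issue a client certificate signed by an existing CA, roughly
    // what the "generating minikube-user signed cert" step above does.
    // Illustrative sketch; file names, subject, and key encoding are assumptions.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	caCertPEM, err := os.ReadFile("ca.crt") // assumed local copies of the CA pair
    	must(err)
    	caKeyPEM, err := os.ReadFile("ca.key")
    	must(err)
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	must(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key
    	must(err)

    	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &clientKey.PublicKey, caKey)
    	must(err)

    	must(os.WriteFile("client.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
    	must(os.WriteFile("client.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)}), 0600))
    }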
	I1004 02:14:32.896858  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem (1338 bytes)
	W1004 02:14:32.896909  180785 certs.go:433] ignoring /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565_empty.pem, impossibly tiny 0 bytes
	I1004 02:14:32.896926  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca-key.pem (1679 bytes)
	I1004 02:14:32.896965  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/ca.pem (1078 bytes)
	I1004 02:14:32.897002  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/cert.pem (1123 bytes)
	I1004 02:14:32.897049  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/certs/home/jenkins/minikube-integration/17348-128338/.minikube/certs/key.pem (1675 bytes)
	I1004 02:14:32.897109  180785 certs.go:437] found cert: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem (1708 bytes)
	I1004 02:14:32.897742  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1004 02:14:32.926134  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1004 02:14:32.952077  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1004 02:14:32.978594  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/bridge-171116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1004 02:14:33.008807  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1004 02:14:33.034154  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1004 02:14:33.057692  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1004 02:14:33.081319  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1004 02:14:33.105040  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1004 02:14:33.131037  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/certs/135565.pem --> /usr/share/ca-certificates/135565.pem (1338 bytes)
	I1004 02:14:33.156264  180785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/ssl/certs/1355652.pem --> /usr/share/ca-certificates/1355652.pem (1708 bytes)
	I1004 02:14:33.180223  180785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1004 02:14:33.199325  180785 ssh_runner.go:195] Run: openssl version
	I1004 02:14:33.205400  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1004 02:14:33.216396  180785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:33.221715  180785 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  4 00:44 /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:33.221797  180785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1004 02:14:33.227669  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1004 02:14:33.239518  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/135565.pem && ln -fs /usr/share/ca-certificates/135565.pem /etc/ssl/certs/135565.pem"
	I1004 02:14:33.251377  180785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/135565.pem
	I1004 02:14:33.256477  180785 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  4 00:52 /usr/share/ca-certificates/135565.pem
	I1004 02:14:33.256535  180785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/135565.pem
	I1004 02:14:33.263358  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/135565.pem /etc/ssl/certs/51391683.0"
	I1004 02:14:33.275758  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1355652.pem && ln -fs /usr/share/ca-certificates/1355652.pem /etc/ssl/certs/1355652.pem"
	I1004 02:14:33.288496  180785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1355652.pem
	I1004 02:14:33.293916  180785 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  4 00:52 /usr/share/ca-certificates/1355652.pem
	I1004 02:14:33.293975  180785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1355652.pem
	I1004 02:14:33.299973  180785 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1355652.pem /etc/ssl/certs/3ec20f2e.0"
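Editor's note: the "test -L ... || ln -fs ..." commands above follow the OpenSSL convention of looking up a CA by its subject hash - the certificate is hashed with "openssl x509 -hash -noout" and symlinked into /etc/ssl/certs as <hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A standalone sketch of that step, assuming openssl is on PATH and reusing a path from this run:

    // cahash.go: symlink a CA certificate into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the "openssl x509 -hash" plus "ln -fs" steps above.
    // Illustrative sketch; the certificate path is an assumption and root is required.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -f: replace an existing link if present
    	if err := os.Symlink(certPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", certPath, "->", link)
    }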
	I1004 02:14:33.312708  180785 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1004 02:14:33.317750  180785 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1004 02:14:33.317816  180785 kubeadm.go:404] StartCluster: {Name:bridge-171116 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.2 ClusterName:bridge-171116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 02:14:33.317931  180785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1004 02:14:33.317992  180785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1004 02:14:33.361353  180785 cri.go:89] found id: ""
	I1004 02:14:33.361439  180785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1004 02:14:33.372026  180785 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1004 02:14:33.385908  180785 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1004 02:14:33.399969  180785 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1004 02:14:33.400016  180785 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1004 02:14:33.615553  180785 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1004 02:14:35.019368  179277 node_ready.go:35] waiting up to 15m0s for node "flannel-171116" to be "Ready" ...
	I1004 02:14:35.022415  179277 addons.go:502] enable addons completed in 1.646614674s: enabled=[default-storageclass storage-provisioner]
	I1004 02:14:37.047117  179277 node_ready.go:58] node "flannel-171116" has status "Ready":"False"
	I1004 02:14:39.545890  179277 node_ready.go:58] node "flannel-171116" has status "Ready":"False"
	I1004 02:14:41.547272  179277 node_ready.go:58] node "flannel-171116" has status "Ready":"False"
	I1004 02:14:46.406503  180785 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1004 02:14:46.406582  180785 kubeadm.go:322] [preflight] Running pre-flight checks
	I1004 02:14:46.406667  180785 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1004 02:14:46.406776  180785 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1004 02:14:46.406852  180785 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1004 02:14:46.406913  180785 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1004 02:14:46.408457  180785 out.go:204]   - Generating certificates and keys ...
	I1004 02:14:46.408552  180785 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1004 02:14:46.408643  180785 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1004 02:14:46.408763  180785 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1004 02:14:46.408845  180785 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1004 02:14:46.408918  180785 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1004 02:14:46.408968  180785 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1004 02:14:46.409013  180785 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1004 02:14:46.409129  180785 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [bridge-171116 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1004 02:14:46.409227  180785 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1004 02:14:46.409393  180785 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [bridge-171116 localhost] and IPs [192.168.72.134 127.0.0.1 ::1]
	I1004 02:14:46.409448  180785 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1004 02:14:46.409507  180785 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1004 02:14:46.409544  180785 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1004 02:14:46.409592  180785 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1004 02:14:46.409638  180785 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1004 02:14:46.409694  180785 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1004 02:14:46.409749  180785 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1004 02:14:46.409794  180785 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1004 02:14:46.409888  180785 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1004 02:14:46.409970  180785 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1004 02:14:46.412415  180785 out.go:204]   - Booting up control plane ...
	I1004 02:14:46.412539  180785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1004 02:14:46.412636  180785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1004 02:14:46.412720  180785 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1004 02:14:46.412860  180785 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1004 02:14:46.412979  180785 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1004 02:14:46.413050  180785 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1004 02:14:46.413228  180785 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1004 02:14:46.413306  180785 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502740 seconds
	I1004 02:14:46.413402  180785 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1004 02:14:46.413504  180785 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1004 02:14:46.413597  180785 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1004 02:14:46.413775  180785 kubeadm.go:322] [mark-control-plane] Marking the node bridge-171116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1004 02:14:46.413868  180785 kubeadm.go:322] [bootstrap-token] Using token: 9zhfbc.cm4w8c5hqhrpp73p
	I1004 02:14:46.415386  180785 out.go:204]   - Configuring RBAC rules ...
	I1004 02:14:46.415520  180785 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1004 02:14:46.415643  180785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1004 02:14:46.415801  180785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1004 02:14:46.415948  180785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1004 02:14:46.416130  180785 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1004 02:14:46.416235  180785 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1004 02:14:46.416396  180785 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1004 02:14:46.416448  180785 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1004 02:14:46.416499  180785 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1004 02:14:46.416504  180785 kubeadm.go:322] 
	I1004 02:14:46.416570  180785 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1004 02:14:46.416575  180785 kubeadm.go:322] 
	I1004 02:14:46.416656  180785 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1004 02:14:46.416662  180785 kubeadm.go:322] 
	I1004 02:14:46.416692  180785 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1004 02:14:46.416770  180785 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1004 02:14:46.416832  180785 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1004 02:14:46.416838  180785 kubeadm.go:322] 
	I1004 02:14:46.416911  180785 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1004 02:14:46.416917  180785 kubeadm.go:322] 
	I1004 02:14:46.416984  180785 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1004 02:14:46.416990  180785 kubeadm.go:322] 
	I1004 02:14:46.417049  180785 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1004 02:14:46.417139  180785 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1004 02:14:46.417221  180785 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1004 02:14:46.417228  180785 kubeadm.go:322] 
	I1004 02:14:46.417326  180785 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1004 02:14:46.417415  180785 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1004 02:14:46.417421  180785 kubeadm.go:322] 
	I1004 02:14:46.417522  180785 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9zhfbc.cm4w8c5hqhrpp73p \
	I1004 02:14:46.417646  180785 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 \
	I1004 02:14:46.417674  180785 kubeadm.go:322] 	--control-plane 
	I1004 02:14:46.417679  180785 kubeadm.go:322] 
	I1004 02:14:46.417788  180785 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1004 02:14:46.417807  180785 kubeadm.go:322] 
	I1004 02:14:46.417943  180785 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9zhfbc.cm4w8c5hqhrpp73p \
	I1004 02:14:46.418108  180785 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:31cbe84f0e04bde1bc2bd9c94c879e35e84c2f1e90d6fde71b58f908ef9c4494 
	I1004 02:14:46.418128  180785 cni.go:84] Creating CNI manager for "bridge"
	I1004 02:14:46.420077  180785 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1004 02:14:44.046190  179277 node_ready.go:58] node "flannel-171116" has status "Ready":"False"
	I1004 02:14:44.549092  179277 node_ready.go:49] node "flannel-171116" has status "Ready":"True"
	I1004 02:14:44.549116  179277 node_ready.go:38] duration metric: took 9.52673799s waiting for node "flannel-171116" to be "Ready" ...
	I1004 02:14:44.549127  179277 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1004 02:14:44.570392  179277 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace to be "Ready" ...
	I1004 02:14:46.614288  179277 pod_ready.go:102] pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:14:46.421584  180785 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1004 02:14:46.469478  180785 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
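Editor's note: the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced by "Configuring bridge CNI" a few lines earlier. The log does not show its contents; the sketch below writes an illustrative bridge conflist for the 10.244.0.0/16 pod CIDR, so every field value is an assumption rather than the bytes minikube actually generated:

    // cniconf.go: write an illustrative bridge CNI conflist like the one copied to
    // /etc/cni/net.d/1-k8s.conflist above. The exact JSON minikube uses is not shown
    // in the log, so all field values below are assumptions for the 10.244.0.0/16 CIDR.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }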
	I1004 02:14:46.518877  180785 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1004 02:14:46.518964  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:46.518995  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1 minikube.k8s.io/name=bridge-171116 minikube.k8s.io/updated_at=2023_10_04T02_14_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:46.858563  180785 ops.go:34] apiserver oom_adj: -16
	I1004 02:14:46.858716  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:47.001417  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:47.605164  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:48.105257  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:48.604599  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:49.105327  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:49.605094  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:50.105177  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:50.605446  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:49.114566  179277 pod_ready.go:102] pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:14:51.612430  179277 pod_ready.go:102] pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:14:51.104976  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:51.605466  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:52.104590  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:52.605297  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:53.104950  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:53.604687  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:54.104570  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:54.605536  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:55.104794  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:55.605453  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:53.613114  179277 pod_ready.go:102] pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:14:55.614361  179277 pod_ready.go:102] pod "coredns-5dd5756b68-g5bj6" in "kube-system" namespace has status "Ready":"False"
	I1004 02:14:56.105192  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:56.605045  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:57.105107  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:57.605493  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:58.105196  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:58.605184  180785 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1004 02:14:58.731357  180785 kubeadm.go:1081] duration metric: took 12.212460411s to wait for elevateKubeSystemPrivileges.
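Editor's note: the burst of identical "kubectl get sa default" runs above is a poll - kubeadm init has finished, but the default service account only exists once the controller-manager has reconciled the kube-system namespace, so the command is retried roughly every half second until it succeeds (about 12.2s in this run). A standalone version of that wait loop (kubeconfig path and timeout are assumptions):

    // waitsa.go: poll until the "default" service account exists, mirroring the
    // repeated "kubectl get sa default" calls in the log above. Illustrative sketch;
    // the kubeconfig path and the two-minute timeout are assumptions.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for the default service account")
    }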
	I1004 02:14:58.731401  180785 kubeadm.go:406] StartCluster complete in 25.413590568s
	I1004 02:14:58.731427  180785 settings.go:142] acquiring lock: {Name:mk68d0451eaf7b41dd07a202e015ab26f495283c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:58.731520  180785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 02:14:58.732767  180785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17348-128338/kubeconfig: {Name:mk9b6c9bf764f7b017557125a9a65230c28310cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1004 02:14:58.733059  180785 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1004 02:14:58.733242  180785 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1004 02:14:58.733333  180785 addons.go:69] Setting storage-provisioner=true in profile "bridge-171116"
	I1004 02:14:58.733346  180785 config.go:182] Loaded profile config "bridge-171116": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 02:14:58.733353  180785 addons.go:231] Setting addon storage-provisioner=true in "bridge-171116"
	I1004 02:14:58.733352  180785 addons.go:69] Setting default-storageclass=true in profile "bridge-171116"
	I1004 02:14:58.733416  180785 host.go:66] Checking if "bridge-171116" exists ...
	I1004 02:14:58.733420  180785 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "bridge-171116"
	I1004 02:14:58.733916  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:58.733949  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:58.734059  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:58.734102  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:58.750117  180785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33709
	I1004 02:14:58.750729  180785 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:58.751320  180785 main.go:141] libmachine: Using API Version  1
	I1004 02:14:58.751351  180785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:58.751742  180785 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:58.752032  180785 main.go:141] libmachine: (bridge-171116) Calling .GetState
	I1004 02:14:58.754427  180785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33057
	I1004 02:14:58.754878  180785 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:58.755314  180785 main.go:141] libmachine: Using API Version  1
	I1004 02:14:58.755338  180785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:58.755516  180785 addons.go:231] Setting addon default-storageclass=true in "bridge-171116"
	I1004 02:14:58.755548  180785 host.go:66] Checking if "bridge-171116" exists ...
	I1004 02:14:58.755732  180785 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:58.755920  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:58.755940  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:58.756200  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:58.756250  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:58.773055  180785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44853
	I1004 02:14:58.773446  180785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44953
	I1004 02:14:58.773638  180785 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:58.773817  180785 main.go:141] libmachine: () Calling .GetVersion
	I1004 02:14:58.774280  180785 main.go:141] libmachine: Using API Version  1
	I1004 02:14:58.774307  180785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:58.774433  180785 main.go:141] libmachine: Using API Version  1
	I1004 02:14:58.774447  180785 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 02:14:58.774817  180785 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:58.774817  180785 main.go:141] libmachine: () Calling .GetMachineName
	I1004 02:14:58.775166  180785 main.go:141] libmachine: (bridge-171116) Calling .GetState
	I1004 02:14:58.775430  180785 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17348-128338/.minikube/bin/docker-machine-driver-kvm2
	I1004 02:14:58.775467  180785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 02:14:58.777210  180785 main.go:141] libmachine: (bridge-171116) Calling .DriverName
	I1004 02:14:58.778985  180785 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1004 02:14:58.780540  180785 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1004 02:14:58.780572  180785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1004 02:14:58.780593  180785 main.go:141] libmachine: (bridge-171116) Calling .GetSSHHostname
	I1004 02:14:58.782632  180785 kapi.go:248] "coredns" deployment in "kube-system" namespace and "bridge-171116" context rescaled to 1 replicas
	I1004 02:14:58.782665  180785 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.134 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1004 02:14:58.784305  180785 out.go:177] * Verifying Kubernetes components...
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Wed 2023-10-04 01:58:06 UTC, ends at Wed 2023-10-04 02:15:00 UTC. --
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.578081205Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385700578063058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=834a095e-5ccd-4fe9-aca1-e9d8bb2f715f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.579283084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a037a1c-fa30-40c0-ac33-d1f977576a5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.579332729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a037a1c-fa30-40c0-ac33-d1f977576a5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.579502294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a037a1c-fa30-40c0-ac33-d1f977576a5d name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.641110611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f72b0820-0288-4931-a285-f19da7523dd4 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.641229031Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f72b0820-0288-4931-a285-f19da7523dd4 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.642894783Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8ead3021-00c4-4175-aea8-afdd1f104f07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.643408447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385700643393147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8ead3021-00c4-4175-aea8-afdd1f104f07 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.644497776Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7b712feb-dbcb-4fe2-b1e5-5f13d762ba2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.644595494Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7b712feb-dbcb-4fe2-b1e5-5f13d762ba2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.644839390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7b712feb-dbcb-4fe2-b1e5-5f13d762ba2b name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.698471500Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c4e901cf-abaf-41a0-a05c-b2f0e8214969 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.698588445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c4e901cf-abaf-41a0-a05c-b2f0e8214969 name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.700453382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=17d1fb76-f616-4326-bf4e-fdcf55cc9955 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.701030157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385700701011711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=17d1fb76-f616-4326-bf4e-fdcf55cc9955 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.702027705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fbf5861b-dd48-459b-8629-c17f54369e5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.702188886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fbf5861b-dd48-459b-8629-c17f54369e5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.702481320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fbf5861b-dd48-459b-8629-c17f54369e5e name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.748569093Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d5f1fdf7-586c-4da7-a892-63c93191bf2a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.748697523Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d5f1fdf7-586c-4da7-a892-63c93191bf2a name=/runtime.v1.RuntimeService/Version
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.750966940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bf15b4fd-e440-4a73-a41f-7fe85fbca18d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.751850754Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1696385700751828535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125567,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=bf15b4fd-e440-4a73-a41f-7fe85fbca18d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.753232332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8290654-927a-4ccf-ac7a-b6d5157cfb36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.753338552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8290654-927a-4ccf-ac7a-b6d5157cfb36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 04 02:15:00 default-k8s-diff-port-239802 crio[710]: time="2023-10-04 02:15:00.753569957Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d,PodSandboxId:43c3765fb4461976d4c5ab358309364ce46d2c496e0fb11961654a40c1c94ff1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1696385014215472371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1341113-6631-4c74-9f66-89c883fc4e08,},Annotations:map[string]string{io.kubernetes.container.hash: e8650623,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469,PodSandboxId:86f412782b2de111326129774da6310d47b4cfb0a7300d1b384c8658228877d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1696385013264852077,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-gjn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ad413f-043e-443c-ad1c-83d04099b47d,},Annotations:map[string]string{io.kubernetes.container.hash: 848595c0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPor
t\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9,PodSandboxId:94996d05e0580893fa97a75cc30a75164476d21dc6641bc5eaf117523a472c82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded,State:CONTAINER_RUNNING,CreatedAt:1696385010684657963,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b5ltp,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a7299ef0-9666-4675-8397-7b3e58ac9605,},Annotations:map[string]string{io.kubernetes.container.hash: 20f607e9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41,PodSandboxId:d6951eb8f982077060ea669180da5547afdb659283e3d567afbc37ac8a946086,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1696384990267764088,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa50e8b9e7f3bb2f3
55b8ffb8ea3dc73,},Annotations:map[string]string{io.kubernetes.container.hash: 37fe93b2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279,PodSandboxId:5f9ec4325c8ebf1c1daa5e5fd431b2605f60d0a3f17bbcdb4fb8bd8c06ce0341,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab,State:CONTAINER_RUNNING,CreatedAt:1696384990070371011,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 79e5eac9d342f4843
c7d345089963cea,},Annotations:map[string]string{io.kubernetes.container.hash: 66541c94,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece,PodSandboxId:60c3ef856ec9e053b3d0b67fd920e1359977d4ebe1e8aa3fea0178de9eec4df0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4,State:CONTAINER_RUNNING,CreatedAt:1696384989880945629,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 1c67fb79e369aea59f56f5e869cc2f2f,},Annotations:map[string]string{io.kubernetes.container.hash: 3c14e2ce,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd,PodSandboxId:8a0cd1daa0fcefb664b32a4df53244dbf8e21006e55fdf03a55250a443e672a0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631,State:CONTAINER_RUNNING,CreatedAt:1696384989764993371,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-239802,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d44cb3edf3641db088208247d02c24b3,},Annotations:map[string]string{io.kubernetes.container.hash: dfb8f2fe,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8290654-927a-4ccf-ac7a-b6d5157cfb36 name=/runtime.v1.RuntimeService/ListContainers
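	The Version, ImageFsInfo and ListContainers requests that dominate the CRI-O debug log above are the standard CRI polling RPCs issued by the kubelet. To reproduce them by hand on the node, crictl can drive the same socket named in the node's cri-socket annotation; the commands below are an illustrative sketch (they assume crictl is available on the VM) and were not part of the test run.
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a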
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e68832fdc1a10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 minutes ago      Running             storage-provisioner       0                   43c3765fb4461       storage-provisioner
	e79ebe90e174e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   11 minutes ago      Running             coredns                   0                   86f412782b2de       coredns-5dd5756b68-gjn6v
	4cfac8b575d61       c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0   11 minutes ago      Running             kube-proxy                0                   94996d05e0580       kube-proxy-b5ltp
	b11685bcb8d2c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   11 minutes ago      Running             etcd                      2                   d6951eb8f9820       etcd-default-k8s-diff-port-239802
	61f2aacf5ae30       7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8   11 minutes ago      Running             kube-scheduler            2                   5f9ec4325c8eb       kube-scheduler-default-k8s-diff-port-239802
	7eb2c7cdd906b       55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57   11 minutes ago      Running             kube-controller-manager   2                   60c3ef856ec9e       kube-controller-manager-default-k8s-diff-port-239802
	88b798cbf497b       cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce   11 minutes ago      Running             kube-apiserver            2                   8a0cd1daa0fce       kube-apiserver-default-k8s-diff-port-239802
	
	* 
	* ==> coredns [e79ebe90e174edb4d79563ce504e6d542910c896d290d9280b97bfac0fdb6469] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44913 - 9724 "HINFO IN 2172030799814606730.8629516611717317443. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.103130129s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-239802
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-239802
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=cacb4070dc820e9f8fe7f94a5c041e95e45c32b1
	                    minikube.k8s.io/name=default-k8s-diff-port-239802
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_04T02_03_17_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Oct 2023 02:03:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-239802
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Oct 2023 02:14:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Oct 2023 02:13:50 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Oct 2023 02:13:50 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Oct 2023 02:13:50 +0000   Wed, 04 Oct 2023 02:03:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Oct 2023 02:13:50 +0000   Wed, 04 Oct 2023 02:03:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.105
	  Hostname:    default-k8s-diff-port-239802
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e9ba040ccf943748a952ffe0b1f0c13
	  System UUID:                7e9ba040-ccf9-4374-8a95-2ffe0b1f0c13
	  Boot ID:                    faf6834d-b499-4d93-a0d5-ecbdb74af482
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-gjn6v                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     11m
	  kube-system                 etcd-default-k8s-diff-port-239802                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kube-apiserver-default-k8s-diff-port-239802             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-239802    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-b5ltp                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-default-k8s-diff-port-239802             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-57f55c9bc5-c5ww7                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 11m   kube-proxy       
	  Normal  Starting                 11m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m   kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m   kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m   kubelet          Node default-k8s-diff-port-239802 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             11m   kubelet          Node default-k8s-diff-port-239802 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  11m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                11m   kubelet          Node default-k8s-diff-port-239802 status is now: NodeReady
	  Normal  RegisteredNode           11m   node-controller  Node default-k8s-diff-port-239802 event: Registered Node default-k8s-diff-port-239802 in Controller
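	For reference, the request/limit percentages in the node description are simply the summed pod requests divided by the node's allocatable resources (2 CPUs, 2165900Ki memory):
	
	  cpu:    850m  / 2000m                 = 42% requested, 0% limited
	  memory: 370Mi / 2165900Ki (378880Ki)  ≈ 17% requested; 170Mi (174080Ki) ≈ 8% limited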
	
	* 
	* ==> dmesg <==
	* [Oct 4 01:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074347] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct 4 01:58] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.493660] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.160386] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.530179] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.901548] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.124596] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.137925] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.098545] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.221220] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.220280] systemd-fstab-generator[909]: Ignoring "noauto" for root device
	[ +20.171343] kauditd_printk_skb: 29 callbacks suppressed
	[Oct 4 02:03] systemd-fstab-generator[3506]: Ignoring "noauto" for root device
	[  +9.293636] systemd-fstab-generator[3828]: Ignoring "noauto" for root device
	[ +14.403595] kauditd_printk_skb: 2 callbacks suppressed
	[Oct 4 02:12] hrtimer: interrupt took 1794197 ns
	
	* 
	* ==> etcd [b11685bcb8d2c69d3146a1e59890e1597141483589e4378906248f676cf51d41] <==
	* {"level":"info","ts":"2023-10-04T02:10:21.607324Z","caller":"traceutil/trace.go:171","msg":"trace[1312185474] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:789; }","duration":"397.801465ms","start":"2023-10-04T02:10:21.209491Z","end":"2023-10-04T02:10:21.607293Z","steps":["trace[1312185474] 'range keys from in-memory index tree'  (duration: 397.264244ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:10:21.607429Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-10-04T02:10:21.209478Z","time spent":"397.926923ms","remote":"127.0.0.1:60898","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"warn","ts":"2023-10-04T02:10:21.608041Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.376015ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108843034626627308 > lease_revoke:<id:39058af86cc216a7>","response":"size:28"}
	{"level":"warn","ts":"2023-10-04T02:10:21.930458Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.040536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:10:21.930564Z","caller":"traceutil/trace.go:171","msg":"trace[1078186917] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:789; }","duration":"162.162007ms","start":"2023-10-04T02:10:21.768385Z","end":"2023-10-04T02:10:21.930547Z","steps":["trace[1078186917] 'range keys from in-memory index tree'  (duration: 161.949243ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:49.304099Z","caller":"traceutil/trace.go:171","msg":"trace[706154042] transaction","detail":"{read_only:false; response_revision:812; number_of_response:1; }","duration":"107.639501ms","start":"2023-10-04T02:10:49.196442Z","end":"2023-10-04T02:10:49.304082Z","steps":["trace[706154042] 'process raft request'  (duration: 107.265393ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:10:51.506223Z","caller":"traceutil/trace.go:171","msg":"trace[60450801] transaction","detail":"{read_only:false; response_revision:813; number_of_response:1; }","duration":"192.162285ms","start":"2023-10-04T02:10:51.314046Z","end":"2023-10-04T02:10:51.506208Z","steps":["trace[60450801] 'process raft request'  (duration: 191.897305ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:11:54.255489Z","caller":"traceutil/trace.go:171","msg":"trace[299668416] transaction","detail":"{read_only:false; response_revision:862; number_of_response:1; }","duration":"262.75889ms","start":"2023-10-04T02:11:53.992685Z","end":"2023-10-04T02:11:54.255443Z","steps":["trace[299668416] 'process raft request'  (duration: 262.329094ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:12:16.091555Z","caller":"traceutil/trace.go:171","msg":"trace[699265712] transaction","detail":"{read_only:false; response_revision:880; number_of_response:1; }","duration":"125.879874ms","start":"2023-10-04T02:12:15.965649Z","end":"2023-10-04T02:12:16.091529Z","steps":["trace[699265712] 'process raft request'  (duration: 63.97785ms)","trace[699265712] 'compare'  (duration: 61.73589ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-04T02:12:16.411507Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.14344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:12:16.411625Z","caller":"traceutil/trace.go:171","msg":"trace[758708642] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:880; }","duration":"204.349185ms","start":"2023-10-04T02:12:16.207249Z","end":"2023-10-04T02:12:16.411598Z","steps":["trace[758708642] 'range keys from in-memory index tree'  (duration: 204.017345ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:12:16.606884Z","caller":"traceutil/trace.go:171","msg":"trace[1300432100] transaction","detail":"{read_only:false; response_revision:881; number_of_response:1; }","duration":"188.250508ms","start":"2023-10-04T02:12:16.418612Z","end":"2023-10-04T02:12:16.606863Z","steps":["trace[1300432100] 'process raft request'  (duration: 188.02898ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:13:12.48293Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2023-10-04T02:13:12.485771Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":683,"took":"2.412516ms","hash":2286581802}
	{"level":"info","ts":"2023-10-04T02:13:12.485848Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2286581802,"revision":683,"compact-revision":-1}
	{"level":"warn","ts":"2023-10-04T02:14:06.240801Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.438095ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4108843034626628405 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.105\" mod_revision:963 > success:<request_put:<key:\"/registry/masterleases/192.168.61.105\" value_size:67 lease:4108843034626628403 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.105\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-10-04T02:14:06.241954Z","caller":"traceutil/trace.go:171","msg":"trace[463355970] linearizableReadLoop","detail":"{readStateIndex:1114; appliedIndex:1113; }","duration":"215.393628ms","start":"2023-10-04T02:14:06.026509Z","end":"2023-10-04T02:14:06.241903Z","steps":["trace[463355970] 'read index received'  (duration: 76.819376ms)","trace[463355970] 'applied index is now lower than readState.Index'  (duration: 138.572535ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-04T02:14:06.24197Z","caller":"traceutil/trace.go:171","msg":"trace[2144567773] transaction","detail":"{read_only:false; response_revision:971; number_of_response:1; }","duration":"268.287708ms","start":"2023-10-04T02:14:05.973659Z","end":"2023-10-04T02:14:06.241947Z","steps":["trace[2144567773] 'process raft request'  (duration: 129.710644ms)","trace[2144567773] 'compare'  (duration: 136.203143ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-04T02:14:06.242178Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.655413ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-10-04T02:14:06.242602Z","caller":"traceutil/trace.go:171","msg":"trace[756003895] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:971; }","duration":"216.094363ms","start":"2023-10-04T02:14:06.026449Z","end":"2023-10-04T02:14:06.242543Z","steps":["trace[756003895] 'agreement among raft nodes before linearized reading'  (duration: 215.555109ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-04T02:14:32.796421Z","caller":"traceutil/trace.go:171","msg":"trace[1405135331] transaction","detail":"{read_only:false; response_revision:992; number_of_response:1; }","duration":"105.362408ms","start":"2023-10-04T02:14:32.691028Z","end":"2023-10-04T02:14:32.796391Z","steps":["trace[1405135331] 'process raft request'  (duration: 104.884134ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:14:33.882649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.24258ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:14:33.882762Z","caller":"traceutil/trace.go:171","msg":"trace[612881456] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:993; }","duration":"139.366269ms","start":"2023-10-04T02:14:33.74337Z","end":"2023-10-04T02:14:33.882736Z","steps":["trace[612881456] 'range keys from in-memory index tree'  (duration: 139.157308ms)"],"step_count":1}
	{"level":"warn","ts":"2023-10-04T02:14:33.882889Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.886573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-04T02:14:33.883054Z","caller":"traceutil/trace.go:171","msg":"trace[186571774] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:993; }","duration":"116.059698ms","start":"2023-10-04T02:14:33.766979Z","end":"2023-10-04T02:14:33.883038Z","steps":["trace[186571774] 'range keys from in-memory index tree'  (duration: 115.808425ms)"],"step_count":1}
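	The recurring "apply request took too long" warnings above mean individual etcd applies and range reads exceeded the 100ms expected-duration threshold, which on a 2-CPU VM usually points to disk or CPU contention rather than data problems. To measure etcd latency directly on a node like this, etcdctl can report endpoint status; the endpoint and certificate paths below are assumptions (minikube's usual locations), so adjust them to the cluster at hand.
	
	  sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table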
	
	* 
	* ==> kernel <==
	*  02:15:01 up 17 min,  0 users,  load average: 0.16, 0.25, 0.21
	Linux default-k8s-diff-port-239802 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [88b798cbf497b727f774ecf156fb234e5e8f5c311799bb54b61b3da25fca2bcd] <==
	* I1004 02:11:15.056864       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1004 02:11:15.056788       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:11:15.058601       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:12:13.949011       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1004 02:13:13.952378       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:13:14.062278       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:13:14.062526       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:13:14.063192       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:13:15.063014       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:13:15.063077       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:13:15.063089       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:13:15.063244       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:13:15.063382       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:13:15.064936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1004 02:14:13.948906       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1004 02:14:15.063622       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:14:15.063701       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1004 02:14:15.063723       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1004 02:14:15.066105       1 handler_proxy.go:93] no RequestInfo found in the context
	E1004 02:14:15.066303       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1004 02:14:15.066348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
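	The 503s above all concern the aggregated metrics.k8s.io API: the apiserver is trying to fetch the OpenAPI spec from the metrics-server Service and the backend is not answering. Two quick client-side checks are to inspect the APIService condition and the metrics-server pod; the label selector is metrics-server's usual default and, like the context name, is an assumption here.
	
	  kubectl --context default-k8s-diff-port-239802 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context default-k8s-diff-port-239802 -n kube-system get pods -l k8s-app=metrics-server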
	
	* 
	* ==> kube-controller-manager [7eb2c7cdd906b8b8bf856e6903b61fe98d03b6fc7800193eb9c22bf8d4c24ece] <==
	* I1004 02:09:53.699844       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.282µs"
	E1004 02:09:59.272358       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:09:59.777036       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:29.278661       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:29.787504       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:10:59.286229       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:10:59.797546       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:11:29.296748       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:11:29.812893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:11:59.305690       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:11:59.831749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:12:29.314720       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:12:29.847512       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:12:59.323210       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:12:59.858858       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:13:29.330980       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:13:29.870312       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:13:59.337566       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:13:59.880269       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1004 02:14:29.345019       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:14:29.892016       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1004 02:14:45.687388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="151.042µs"
	I1004 02:14:56.693268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="286.809µs"
	E1004 02:14:59.353583       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1004 02:14:59.903824       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
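	The controller-manager errors are the same symptom seen from another angle: the resource-quota controller and garbage collector fail API discovery because the metrics.k8s.io/v1beta1 group is registered but unavailable. The failure also shows up client-side when kubectl enumerates server API groups (context name assumed as above):
	
	  kubectl --context default-k8s-diff-port-239802 api-resources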
	
	* 
	* ==> kube-proxy [4cfac8b575d6127d6ee3937a928f6181920c7647d150758787cb79b68b44e2a9] <==
	* I1004 02:03:31.371444       1 server_others.go:69] "Using iptables proxy"
	I1004 02:03:31.408306       1 node.go:141] Successfully retrieved node IP: 192.168.61.105
	I1004 02:03:31.486983       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1004 02:03:31.487087       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1004 02:03:31.491388       1 server_others.go:152] "Using iptables Proxier"
	I1004 02:03:31.491506       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1004 02:03:31.491762       1 server.go:846] "Version info" version="v1.28.2"
	I1004 02:03:31.491989       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1004 02:03:31.492933       1 config.go:188] "Starting service config controller"
	I1004 02:03:31.492982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1004 02:03:31.493030       1 config.go:97] "Starting endpoint slice config controller"
	I1004 02:03:31.493049       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1004 02:03:31.493651       1 config.go:315] "Starting node config controller"
	I1004 02:03:31.493697       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1004 02:03:31.594105       1 shared_informer.go:318] Caches are synced for node config
	I1004 02:03:31.594301       1 shared_informer.go:318] Caches are synced for service config
	I1004 02:03:31.594320       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [61f2aacf5ae30aa7ef5c974cab1e91eda80eb18051ab0cd742f7e18b5c269279] <==
	* W1004 02:03:14.132885       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1004 02:03:14.132918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1004 02:03:14.994915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1004 02:03:14.994976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1004 02:03:15.071547       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1004 02:03:15.071675       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1004 02:03:15.181003       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1004 02:03:15.181112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1004 02:03:15.249361       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1004 02:03:15.249464       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1004 02:03:15.368338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1004 02:03:15.368473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1004 02:03:15.407176       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1004 02:03:15.407251       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1004 02:03:15.413005       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1004 02:03:15.413062       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1004 02:03:15.413187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1004 02:03:15.413199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1004 02:03:15.419407       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1004 02:03:15.419460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1004 02:03:15.420594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1004 02:03:15.420778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1004 02:03:15.479519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1004 02:03:15.479609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1004 02:03:17.807023       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-04 01:58:06 UTC, ends at Wed 2023-10-04 02:15:01 UTC. --
	Oct 04 02:12:22 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:22.669884    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:33 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:33.670389    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:47 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:47.669967    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:12:58 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:12:58.670415    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:13:12 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:12.670685    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:13:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:17.705814    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:13:17 default-k8s-diff-port-239802 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:13:17 default-k8s-diff-port-239802 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:13:17 default-k8s-diff-port-239802 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:13:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:17.858670    3835 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Oct 04 02:13:23 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:23.670455    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:13:35 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:35.671778    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:13:50 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:13:50.669966    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:14:01 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:01.669546    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:14:16 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:16.669281    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:14:17 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:17.699290    3835 iptables.go:575] "Could not set up iptables canary" err=<
	Oct 04 02:14:17 default-k8s-diff-port-239802 kubelet[3835]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 04 02:14:17 default-k8s-diff-port-239802 kubelet[3835]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 04 02:14:17 default-k8s-diff-port-239802 kubelet[3835]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 04 02:14:30 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:30.700443    3835 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 02:14:30 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:30.700493    3835 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 04 02:14:30 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:30.700711    3835 kuberuntime_manager.go:1256] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m8wvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-c5ww7_kube-system(94967866-d714-41ed-8ee2-6c7eb8db836e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Oct 04 02:14:30 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:30.700757    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:14:45 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:45.670424    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	Oct 04 02:14:56 default-k8s-diff-port-239802 kubelet[3835]: E1004 02:14:56.670326    3835 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-c5ww7" podUID="94967866-d714-41ed-8ee2-6c7eb8db836e"
	
	* 
	* ==> storage-provisioner [e68832fdc1a10ad315eff8fe414e28edf48be8254b6d9beb5b11ab59752a170d] <==
	* I1004 02:03:34.376497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1004 02:03:34.395267       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1004 02:03:34.396056       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1004 02:03:34.412994       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1004 02:03:34.415385       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2!
	I1004 02:03:34.418485       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0bdfec96-db8d-49ca-ab54-6e7d9d62c081", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2 became leader
	I1004 02:03:34.516235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-239802_4e4bcb63-1c90-4595-8d45-2dd5c1bb13c2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-c5ww7
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7: exit status 1 (79.719194ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-c5ww7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-239802 describe pod metrics-server-57f55c9bc5-c5ww7: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (143.32s)

                                                
                                    

Test pass (230/290)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.34
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.2/json-events 5.38
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.54
20 TestOffline 62.27
22 TestAddons/Setup 143.78
24 TestAddons/parallel/Registry 20.2
26 TestAddons/parallel/InspektorGadget 11.28
27 TestAddons/parallel/MetricsServer 6.21
28 TestAddons/parallel/HelmTiller 13.11
30 TestAddons/parallel/CSI 56.72
31 TestAddons/parallel/Headlamp 15.84
32 TestAddons/parallel/CloudSpanner 5.82
33 TestAddons/parallel/LocalPath 66.07
36 TestAddons/serial/GCPAuth/Namespaces 0.13
38 TestCertOptions 103.55
39 TestCertExpiration 295.63
41 TestForceSystemdFlag 64.37
42 TestForceSystemdEnv 92.46
44 TestKVMDriverInstallOrUpdate 3.15
48 TestErrorSpam/setup 48.8
49 TestErrorSpam/start 0.33
50 TestErrorSpam/status 0.78
51 TestErrorSpam/pause 1.59
52 TestErrorSpam/unpause 1.76
53 TestErrorSpam/stop 2.2
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 64.84
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 54.65
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.08
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
65 TestFunctional/serial/CacheCmd/cache/add_local 2.03
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
67 TestFunctional/serial/CacheCmd/cache/list 0.04
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
70 TestFunctional/serial/CacheCmd/cache/delete 0.08
71 TestFunctional/serial/MinikubeKubectlCmd 0.1
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 37.91
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.55
76 TestFunctional/serial/LogsFileCmd 1.53
77 TestFunctional/serial/InvalidService 4.19
79 TestFunctional/parallel/ConfigCmd 0.3
80 TestFunctional/parallel/DashboardCmd 18.32
81 TestFunctional/parallel/DryRun 0.29
82 TestFunctional/parallel/InternationalLanguage 0.14
83 TestFunctional/parallel/StatusCmd 0.86
87 TestFunctional/parallel/ServiceCmdConnect 32.8
88 TestFunctional/parallel/AddonsCmd 0.11
89 TestFunctional/parallel/PersistentVolumeClaim 46.78
91 TestFunctional/parallel/SSHCmd 0.38
92 TestFunctional/parallel/CpCmd 0.89
93 TestFunctional/parallel/MySQL 34.21
94 TestFunctional/parallel/FileSync 0.21
95 TestFunctional/parallel/CertSync 1.5
99 TestFunctional/parallel/NodeLabels 0.08
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
103 TestFunctional/parallel/License 0.2
104 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
105 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
106 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
107 TestFunctional/parallel/Version/short 0.05
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
109 TestFunctional/parallel/Version/components 0.71
110 TestFunctional/parallel/ImageCommands/ImageBuild 3.62
111 TestFunctional/parallel/ImageCommands/Setup 1.32
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.48
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.81
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
122 TestFunctional/parallel/ProfileCmd/profile_list 0.26
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.59
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 4.55
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.78
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.05
136 TestFunctional/parallel/ServiceCmd/DeployApp 9.32
137 TestFunctional/parallel/MountCmd/any-port 8.84
138 TestFunctional/parallel/MountCmd/specific-port 1.52
139 TestFunctional/parallel/ServiceCmd/List 0.83
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
143 TestFunctional/parallel/ServiceCmd/Format 0.5
144 TestFunctional/parallel/ServiceCmd/URL 0.52
145 TestFunctional/delete_addon-resizer_images 0.07
146 TestFunctional/delete_my-image_image 0.02
147 TestFunctional/delete_minikube_cached_images 0.02
151 TestIngressAddonLegacy/StartLegacyK8sCluster 77.66
153 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.6
154 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.62
158 TestJSONOutput/start/Command 70.46
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.7
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.63
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 7.09
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.18
186 TestMainNoArgs 0.04
187 TestMinikubeProfile 96.9
190 TestMountStart/serial/StartWithMountFirst 26.06
191 TestMountStart/serial/VerifyMountFirst 0.37
192 TestMountStart/serial/StartWithMountSecond 32.22
193 TestMountStart/serial/VerifyMountSecond 0.37
194 TestMountStart/serial/DeleteFirst 0.69
195 TestMountStart/serial/VerifyMountPostDelete 0.37
196 TestMountStart/serial/Stop 1.16
197 TestMountStart/serial/RestartStopped 23.89
198 TestMountStart/serial/VerifyMountPostStop 0.38
201 TestMultiNode/serial/FreshStart2Nodes 112.99
202 TestMultiNode/serial/DeployApp2Nodes 5.05
204 TestMultiNode/serial/AddNode 42.93
205 TestMultiNode/serial/ProfileList 0.2
206 TestMultiNode/serial/CopyFile 7.04
207 TestMultiNode/serial/StopNode 2.95
208 TestMultiNode/serial/StartAfterStop 31.2
210 TestMultiNode/serial/DeleteNode 1.78
212 TestMultiNode/serial/RestartMultiNode 443.31
213 TestMultiNode/serial/ValidateNameConflict 49.92
220 TestScheduledStopUnix 120.59
228 TestStoppedBinaryUpgrade/Setup 0.32
232 TestPause/serial/Start 81.51
241 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
242 TestNoKubernetes/serial/StartWithK8s 78.82
250 TestNetworkPlugins/group/false 2.92
255 TestNoKubernetes/serial/StartWithStopK8s 46.96
256 TestNoKubernetes/serial/Start 26.87
258 TestStartStop/group/old-k8s-version/serial/FirstStart 162.53
260 TestStartStop/group/no-preload/serial/FirstStart 123.8
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
262 TestNoKubernetes/serial/ProfileList 0.72
263 TestNoKubernetes/serial/Stop 1.24
264 TestNoKubernetes/serial/StartNoArgs 69.5
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
267 TestStartStop/group/embed-certs/serial/FirstStart 65.25
268 TestStartStop/group/no-preload/serial/DeployApp 9.53
269 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
271 TestStartStop/group/embed-certs/serial/DeployApp 10.47
272 TestStartStop/group/old-k8s-version/serial/DeployApp 9.47
273 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.12
275 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
278 TestStartStop/group/newest-cni/serial/FirstStart 60.47
279 TestStartStop/group/newest-cni/serial/DeployApp 0
280 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.53
281 TestStartStop/group/newest-cni/serial/Stop 11.1
282 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
283 TestStartStop/group/newest-cni/serial/SecondStart 49.56
285 TestStartStop/group/no-preload/serial/SecondStart 676.91
288 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
289 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
290 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
291 TestStartStop/group/newest-cni/serial/Pause 2.5
293 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 326.24
294 TestStartStop/group/embed-certs/serial/SecondStart 625.39
295 TestStartStop/group/old-k8s-version/serial/SecondStart 750.56
296 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.59
297 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4.03
300 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 624.4
308 TestNetworkPlugins/group/auto/Start 77.47
309 TestNetworkPlugins/group/kindnet/Start 87.94
310 TestNetworkPlugins/group/auto/KubeletFlags 0.23
311 TestNetworkPlugins/group/auto/NetCatPod 12.44
312 TestNetworkPlugins/group/auto/DNS 0.26
313 TestNetworkPlugins/group/auto/Localhost 0.25
314 TestNetworkPlugins/group/auto/HairPin 0.2
315 TestNetworkPlugins/group/calico/Start 97.49
316 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
317 TestNetworkPlugins/group/kindnet/KubeletFlags 0.79
318 TestNetworkPlugins/group/kindnet/NetCatPod 11.42
319 TestNetworkPlugins/group/custom-flannel/Start 101.3
320 TestNetworkPlugins/group/kindnet/DNS 0.2
321 TestNetworkPlugins/group/kindnet/Localhost 0.17
322 TestNetworkPlugins/group/kindnet/HairPin 0.19
323 TestNetworkPlugins/group/enable-default-cni/Start 88.31
325 TestNetworkPlugins/group/calico/ControllerPod 5.04
326 TestNetworkPlugins/group/calico/KubeletFlags 0.21
327 TestNetworkPlugins/group/calico/NetCatPod 12.45
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.41
330 TestNetworkPlugins/group/calico/DNS 0.2
331 TestNetworkPlugins/group/calico/Localhost 0.18
332 TestNetworkPlugins/group/calico/HairPin 0.18
333 TestNetworkPlugins/group/custom-flannel/DNS 0.2
334 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
335 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
336 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.55
337 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.41
338 TestNetworkPlugins/group/flannel/Start 91.3
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
342 TestNetworkPlugins/group/bridge/Start 79.94
343 TestNetworkPlugins/group/flannel/ControllerPod 5.03
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
345 TestNetworkPlugins/group/bridge/NetCatPod 11.34
346 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
347 TestNetworkPlugins/group/flannel/NetCatPod 11.44
348 TestNetworkPlugins/group/bridge/DNS 0.18
349 TestNetworkPlugins/group/bridge/Localhost 0.15
350 TestNetworkPlugins/group/bridge/HairPin 0.15
351 TestNetworkPlugins/group/flannel/DNS 0.18
352 TestNetworkPlugins/group/flannel/Localhost 0.16
353 TestNetworkPlugins/group/flannel/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (7.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-054908 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-054908 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.342530007s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.34s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-054908
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-054908: exit status 85 (54.324056ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |          |
	|         | -p download-only-054908        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 00:43:27
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 00:43:27.697372  135576 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:43:27.697625  135576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:27.697634  135576 out.go:309] Setting ErrFile to fd 2...
	I1004 00:43:27.697639  135576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:27.697836  135576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	W1004 00:43:27.698007  135576 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17348-128338/.minikube/config/config.json: open /home/jenkins/minikube-integration/17348-128338/.minikube/config/config.json: no such file or directory
	I1004 00:43:27.698623  135576 out.go:303] Setting JSON to true
	I1004 00:43:27.699537  135576 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5159,"bootTime":1696375049,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:43:27.699605  135576 start.go:138] virtualization: kvm guest
	I1004 00:43:27.702157  135576 out.go:97] [download-only-054908] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 00:43:27.703927  135576 out.go:169] MINIKUBE_LOCATION=17348
	W1004 00:43:27.702345  135576 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball: no such file or directory
	I1004 00:43:27.702405  135576 notify.go:220] Checking for updates...
	I1004 00:43:27.707212  135576 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:43:27.708752  135576 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:43:27.710282  135576 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:43:27.711979  135576 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1004 00:43:27.715036  135576 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 00:43:27.715271  135576 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 00:43:27.835671  135576 out.go:97] Using the kvm2 driver based on user configuration
	I1004 00:43:27.835707  135576 start.go:298] selected driver: kvm2
	I1004 00:43:27.835715  135576 start.go:902] validating driver "kvm2" against <nil>
	I1004 00:43:27.836101  135576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:27.836233  135576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 00:43:27.852188  135576 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 00:43:27.852245  135576 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1004 00:43:27.852741  135576 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1004 00:43:27.852920  135576 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1004 00:43:27.852956  135576 cni.go:84] Creating CNI manager for ""
	I1004 00:43:27.852970  135576 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:43:27.852989  135576 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1004 00:43:27.853001  135576 start_flags.go:321] config:
	{Name:download-only-054908 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-054908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:43:27.853251  135576 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:27.855283  135576 out.go:97] Downloading VM boot image ...
	I1004 00:43:27.855320  135576 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1004 00:43:30.733214  135576 out.go:97] Starting control plane node download-only-054908 in cluster download-only-054908
	I1004 00:43:30.733244  135576 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 00:43:30.764532  135576 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1004 00:43:30.764627  135576 cache.go:57] Caching tarball of preloaded images
	I1004 00:43:30.764793  135576 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1004 00:43:30.766849  135576 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1004 00:43:30.766880  135576 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:43:30.795727  135576 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-054908"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (5.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-054908 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-054908 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.382890205s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (5.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-054908
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-054908: exit status 85 (54.771963ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |          |
	|         | -p download-only-054908        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-054908 | jenkins | v1.31.2 | 04 Oct 23 00:43 UTC |          |
	|         | -p download-only-054908        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/04 00:43:35
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1004 00:43:35.097384  135624 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:43:35.097641  135624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:35.097651  135624 out.go:309] Setting ErrFile to fd 2...
	I1004 00:43:35.097656  135624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:43:35.097876  135624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	W1004 00:43:35.098032  135624 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17348-128338/.minikube/config/config.json: open /home/jenkins/minikube-integration/17348-128338/.minikube/config/config.json: no such file or directory
	I1004 00:43:35.098506  135624 out.go:303] Setting JSON to true
	I1004 00:43:35.099327  135624 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5166,"bootTime":1696375049,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:43:35.099383  135624 start.go:138] virtualization: kvm guest
	I1004 00:43:35.101315  135624 out.go:97] [download-only-054908] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 00:43:35.102735  135624 out.go:169] MINIKUBE_LOCATION=17348
	I1004 00:43:35.101479  135624 notify.go:220] Checking for updates...
	I1004 00:43:35.105228  135624 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:43:35.106432  135624 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:43:35.107666  135624 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:43:35.109816  135624 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1004 00:43:35.112615  135624 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1004 00:43:35.113030  135624 config.go:182] Loaded profile config "download-only-054908": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1004 00:43:35.113103  135624 start.go:810] api.Load failed for download-only-054908: filestore "download-only-054908": Docker machine "download-only-054908" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1004 00:43:35.113176  135624 driver.go:373] Setting default libvirt URI to qemu:///system
	W1004 00:43:35.113201  135624 start.go:810] api.Load failed for download-only-054908: filestore "download-only-054908": Docker machine "download-only-054908" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1004 00:43:35.146119  135624 out.go:97] Using the kvm2 driver based on existing profile
	I1004 00:43:35.146142  135624 start.go:298] selected driver: kvm2
	I1004 00:43:35.146148  135624 start.go:902] validating driver "kvm2" against &{Name:download-only-054908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-054908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:43:35.146519  135624 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:35.146597  135624 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17348-128338/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1004 00:43:35.162244  135624 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1004 00:43:35.162927  135624 cni.go:84] Creating CNI manager for ""
	I1004 00:43:35.162944  135624 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1004 00:43:35.162954  135624 start_flags.go:321] config:
	{Name:download-only-054908 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-054908 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:43:35.163081  135624 iso.go:125] acquiring lock: {Name:mk90c74d4685e48c5767bc137904c0bf79ef30d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1004 00:43:35.164810  135624 out.go:97] Starting control plane node download-only-054908 in cluster download-only-054908
	I1004 00:43:35.164821  135624 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 00:43:35.207209  135624 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 00:43:35.207241  135624 cache.go:57] Caching tarball of preloaded images
	I1004 00:43:35.207366  135624 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 00:43:35.209180  135624 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1004 00:43:35.209200  135624 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:43:35.238304  135624 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:63ef340a9dae90462e676325aa502af3 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4
	I1004 00:43:38.820062  135624 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:43:38.820155  135624 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17348-128338/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-amd64.tar.lz4 ...
	I1004 00:43:39.726688  135624 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1004 00:43:39.726817  135624 profile.go:148] Saving config to /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/download-only-054908/config.json ...
	I1004 00:43:39.727022  135624 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1004 00:43:39.727216  135624 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17348-128338/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-054908"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-054908
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-652416 --alsologtostderr --binary-mirror http://127.0.0.1:37489 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-652416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-652416
--- PASS: TestBinaryMirror (0.54s)

                                                
                                    
TestOffline (62.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-398840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-398840 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.057351578s)
helpers_test.go:175: Cleaning up "offline-crio-398840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-398840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-398840: (1.207637055s)
--- PASS: TestOffline (62.27s)

                                                
                                    
TestAddons/Setup (143.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p addons-718830 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p addons-718830 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.77587312s)
--- PASS: TestAddons/Setup (143.78s)

                                                
                                    
TestAddons/parallel/Registry (20.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 28.065048ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7csqs" [198945ee-f053-4037-b249-cd1a85d4d6d8] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.045018115s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rtq86" [0437de54-484e-49ca-a275-2aac0f07bf3c] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.042980946s
addons_test.go:318: (dbg) Run:  kubectl --context addons-718830 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-718830 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-718830 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (9.186578896s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 ip
addons_test.go:366: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (20.20s)
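
Note: the registry check above reduces to two commands, reproduced here as a sketch (it assumes the addons-718830 profile from this run is still up; the registry-test pod is a throwaway name from the log):

	# probe the in-cluster registry service from a temporary busybox pod
	kubectl --context addons-718830 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# print the cluster VM's IP, which the test then uses for a follow-up HTTP check on port 5000
	out/minikube-linux-amd64 -p addons-718830 ip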

                                                
                                    
TestAddons/parallel/InspektorGadget (11.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9k6jx" [23889d6d-a296-4d27-b231-ed8ad7162515] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.042695826s
addons_test.go:819: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-718830
addons_test.go:819: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-718830: (6.23914873s)
--- PASS: TestAddons/parallel/InspektorGadget (11.28s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 28.005401ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-t2klq" [1c2b0f0d-72fe-46dd-9216-23673a621653] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.05745999s
addons_test.go:393: (dbg) Run:  kubectl --context addons-718830 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:410: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 addons disable metrics-server --alsologtostderr -v=1: (1.036334951s)
--- PASS: TestAddons/parallel/MetricsServer (6.21s)

                                                
                                    
TestAddons/parallel/HelmTiller (13.11s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 27.794846ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-xh5qq" [dcd8e248-e9c0-40fc-8ceb-baaaa18c5b9e] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.045586056s
addons_test.go:451: (dbg) Run:  kubectl --context addons-718830 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-718830 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.304295684s)
addons_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.11s)

                                                
                                    
TestAddons/parallel/CSI (56.72s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 5.929907ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/10/04 00:46:24 [DEBUG] GET http://192.168.39.89:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2b44815d-a1b8-4c5b-99a8-6a0eac25dd9d] Pending
helpers_test.go:344: "task-pv-pod" [2b44815d-a1b8-4c5b-99a8-6a0eac25dd9d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2b44815d-a1b8-4c5b-99a8-6a0eac25dd9d] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.010783091s
addons_test.go:562: (dbg) Run:  kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-718830 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-718830 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-718830 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-718830 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-718830 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0f3a2e00-7a88-480f-afa7-56e69e08c085] Pending
helpers_test.go:344: "task-pv-pod-restore" [0f3a2e00-7a88-480f-afa7-56e69e08c085] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0f3a2e00-7a88-480f-afa7-56e69e08c085] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.027326058s
addons_test.go:604: (dbg) Run:  kubectl --context addons-718830 delete pod task-pv-pod-restore
addons_test.go:604: (dbg) Done: kubectl --context addons-718830 delete pod task-pv-pod-restore: (1.387468543s)
addons_test.go:608: (dbg) Run:  kubectl --context addons-718830 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-718830 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.878599543s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.72s)
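
Stripped of the PVC polling, the CSI exercise above is a create/snapshot/restore round trip; as a sketch using the manifests referenced in the log (their contents live in minikube's test data and are not shown here):

	kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pvc.yaml        # claim a csi-hostpath volume
	kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pv-pod.yaml     # pod that mounts it
	kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/snapshot.yaml   # snapshot the volume
	kubectl --context addons-718830 delete pod task-pv-pod && kubectl --context addons-718830 delete pvc hpvc
	kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # new PVC restored from the snapshot
	kubectl --context addons-718830 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod that mounts the restored PVC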

                                                
                                    
TestAddons/parallel/Headlamp (15.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-718830 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-718830 --alsologtostderr -v=1: (1.78874653s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-cwm9b" [173edb6c-8d90-4319-b4e0-8c6de3abb9ae] Pending
helpers_test.go:344: "headlamp-58b88cff49-cwm9b" [173edb6c-8d90-4319-b4e0-8c6de3abb9ae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-cwm9b" [173edb6c-8d90-4319-b4e0-8c6de3abb9ae] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.054950431s
--- PASS: TestAddons/parallel/Headlamp (15.84s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.82s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-k8gzr" [62f81a77-05b0-4384-9054-a3c491912590] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009828891s
addons_test.go:838: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-718830
--- PASS: TestAddons/parallel/CloudSpanner (5.82s)

                                                
                                    
TestAddons/parallel/LocalPath (66.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-718830 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-718830 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8349b888-41c7-4eaa-9248-4e2bdd3f7e3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8349b888-41c7-4eaa-9248-4e2bdd3f7e3c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8349b888-41c7-4eaa-9248-4e2bdd3f7e3c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.01189982s
addons_test.go:869: (dbg) Run:  kubectl --context addons-718830 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 ssh "cat /opt/local-path-provisioner/pvc-48c55315-2a94-4604-a9bc-b609ad992d89_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-718830 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-718830 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-718830 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-amd64 -p addons-718830 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.95755633s)
--- PASS: TestAddons/parallel/LocalPath (66.07s)
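
Condensed, the local-path check above claims storage, runs a pod that writes into it, then reads the file back from the provisioner's host directory (a sketch; the pvc-48c55315-... directory name is specific to this run):

	kubectl --context addons-718830 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-718830 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-linux-amd64 -p addons-718830 ssh "cat /opt/local-path-provisioner/pvc-48c55315-2a94-4604-a9bc-b609ad992d89_default_test-pvc/file1"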

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-718830 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-718830 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)
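
The namespace check above simply verifies that the gcp-auth secret shows up in a freshly created namespace (commands taken from the log):

	kubectl --context addons-718830 create ns new-namespace
	kubectl --context addons-718830 get secret gcp-auth -n new-namespace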

                                                
                                    
TestCertOptions (103.55s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-703971 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1004 01:38:15.375585  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-703971 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m42.052495383s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-703971 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-703971 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-703971 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-703971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-703971
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-703971: (1.018323331s)
--- PASS: TestCertOptions (103.55s)

                                                
                                    
TestCertExpiration (295.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-528457 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-528457 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m17.60175607s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-528457 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-528457 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (37.046249776s)
helpers_test.go:175: Cleaning up "cert-expiration-528457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-528457
--- PASS: TestCertExpiration (295.63s)

                                                
                                    
TestForceSystemdFlag (64.37s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-127356 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-127356 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m3.169063679s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-127356 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-127356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-127356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-127356: (1.002499137s)
--- PASS: TestForceSystemdFlag (64.37s)

                                                
                                    
TestForceSystemdEnv (92.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-874915 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-874915 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m31.474037764s)
helpers_test.go:175: Cleaning up "force-systemd-env-874915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-874915
--- PASS: TestForceSystemdEnv (92.46s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.15s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.15s)

                                                
                                    
TestErrorSpam/setup (48.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-866155 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-866155 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-866155 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-866155 --driver=kvm2  --container-runtime=crio: (48.799902493s)
--- PASS: TestErrorSpam/setup (48.80s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.78s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 status
--- PASS: TestErrorSpam/status (0.78s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

                                                
                                    
TestErrorSpam/stop (2.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 stop: (2.074900566s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-866155 --log_dir /tmp/nospam-866155 stop
--- PASS: TestErrorSpam/stop (2.20s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17348-128338/.minikube/files/etc/test/nested/copy/135565/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (64.84s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-398727 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m4.840330097s)
--- PASS: TestFunctional/serial/StartWithProxy (64.84s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (54.65s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-398727 --alsologtostderr -v=8: (54.651879413s)
functional_test.go:659: soft start took 54.652737443s for "functional-398727" cluster.
--- PASS: TestFunctional/serial/SoftStart (54.65s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-398727 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:3.1: (1.158114641s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:3.3: (1.075405399s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 cache add registry.k8s.io/pause:latest: (1.10157864s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-398727 /tmp/TestFunctionalserialCacheCmdcacheadd_local1815177359/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache add minikube-local-cache-test:functional-398727
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 cache add minikube-local-cache-test:functional-398727: (1.737128016s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache delete minikube-local-cache-test:functional-398727
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-398727
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.03s)
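
The local-image path above builds a throwaway image on the host, adds it to minikube's cache, then cleans up; roughly (the build context is a temp directory created by the test):

	docker build -t minikube-local-cache-test:functional-398727 <test-temp-dir>
	out/minikube-linux-amd64 -p functional-398727 cache add minikube-local-cache-test:functional-398727
	out/minikube-linux-amd64 -p functional-398727 cache delete minikube-local-cache-test:functional-398727
	docker rmi minikube-local-cache-test:functional-398727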

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.658465ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
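
The sequence above shows that "cache reload" pushes cached images back into the node after one has been removed; the commands, taken from the log (the first inspecti is expected to fail):

	out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-398727 cache reload
	out/minikube-linux-amd64 -p functional-398727 ssh sudo crictl inspecti registry.k8s.io/pause:latest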

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 kubectl -- --context functional-398727 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-398727 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-398727 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.91224422s)
functional_test.go:757: restart took 37.91243368s for "functional-398727" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-398727 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 logs: (1.545243258s)
--- PASS: TestFunctional/serial/LogsCmd (1.55s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 logs --file /tmp/TestFunctionalserialLogsFileCmd2904102499/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 logs --file /tmp/TestFunctionalserialLogsFileCmd2904102499/001/logs.txt: (1.527144773s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.53s)

                                                
                                    
TestFunctional/serial/InvalidService (4.19s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-398727 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-398727
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-398727: exit status 115 (298.119124ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.25:31717 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-398727 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)
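
For reference, the negative test above applies a service with no running backing pod and expects "minikube service" to exit with SVC_UNREACHABLE (exit status 115); a sketch using the log's commands:

	kubectl --context functional-398727 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-398727   # expected to fail with SVC_UNREACHABLE
	kubectl --context functional-398727 delete -f testdata/invalidsvc.yaml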

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 config get cpus: exit status 14 (42.493418ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 config get cpus: exit status 14 (47.709496ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (18.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-398727 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-398727 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 143765: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (18.32s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.620073ms)

                                                
                                                
-- stdout --
	* [functional-398727] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 00:56:19.787630  143403 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:56:19.787785  143403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:19.787799  143403 out.go:309] Setting ErrFile to fd 2...
	I1004 00:56:19.787806  143403 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:19.788041  143403 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 00:56:19.788576  143403 out.go:303] Setting JSON to false
	I1004 00:56:19.789717  143403 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5931,"bootTime":1696375049,"procs":262,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:56:19.789795  143403 start.go:138] virtualization: kvm guest
	I1004 00:56:19.792263  143403 out.go:177] * [functional-398727] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 00:56:19.794325  143403 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 00:56:19.795789  143403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:56:19.794352  143403 notify.go:220] Checking for updates...
	I1004 00:56:19.798722  143403 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:56:19.800169  143403 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:56:19.801753  143403 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 00:56:19.803090  143403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 00:56:19.805163  143403 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 00:56:19.805747  143403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:56:19.805823  143403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:56:19.826403  143403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34343
	I1004 00:56:19.827117  143403 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:56:19.827795  143403 main.go:141] libmachine: Using API Version  1
	I1004 00:56:19.827821  143403 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:56:19.828357  143403 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:56:19.828574  143403 main.go:141] libmachine: (functional-398727) Calling .DriverName
	I1004 00:56:19.828814  143403 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 00:56:19.829226  143403 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:56:19.829272  143403 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:56:19.851991  143403 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43693
	I1004 00:56:19.852418  143403 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:56:19.853070  143403 main.go:141] libmachine: Using API Version  1
	I1004 00:56:19.853097  143403 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:56:19.853561  143403 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:56:19.853757  143403 main.go:141] libmachine: (functional-398727) Calling .DriverName
	I1004 00:56:19.893630  143403 out.go:177] * Using the kvm2 driver based on existing profile
	I1004 00:56:19.894997  143403 start.go:298] selected driver: kvm2
	I1004 00:56:19.895012  143403 start.go:902] validating driver "kvm2" against &{Name:functional-398727 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-398727 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:56:19.895124  143403 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 00:56:19.897506  143403 out.go:177] 
	W1004 00:56:19.898798  143403 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1004 00:56:19.900082  143403 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
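
Note: the non-zero exit above is the expected outcome of this test, since 250MB is below the 1800MB usable minimum reported in the error. For reference, a dry-run that clears the memory check would look like the sketch below; the 2048MB value is illustrative, everything else matches the command logged above.

# dry-run validation of the existing functional-398727 profile with memory above the 1800MB floor
out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2 --container-runtime=crio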

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
E1004 00:56:07.755069  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (137.672043ms)

                                                
                                                
-- stdout --
	* [functional-398727] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 00:56:07.771687  142453 out.go:296] Setting OutFile to fd 1 ...
	I1004 00:56:07.771833  142453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:07.771843  142453 out.go:309] Setting ErrFile to fd 2...
	I1004 00:56:07.771848  142453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 00:56:07.772131  142453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 00:56:07.772705  142453 out.go:303] Setting JSON to false
	I1004 00:56:07.773577  142453 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5919,"bootTime":1696375049,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 00:56:07.773644  142453 start.go:138] virtualization: kvm guest
	I1004 00:56:07.775901  142453 out.go:177] * [functional-398727] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1004 00:56:07.777550  142453 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 00:56:07.777581  142453 notify.go:220] Checking for updates...
	I1004 00:56:07.779003  142453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 00:56:07.780437  142453 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 00:56:07.781831  142453 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 00:56:07.783241  142453 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 00:56:07.784795  142453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 00:56:07.786905  142453 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 00:56:07.787546  142453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:56:07.787598  142453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:56:07.803327  142453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35131
	I1004 00:56:07.803793  142453 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:56:07.804306  142453 main.go:141] libmachine: Using API Version  1
	I1004 00:56:07.804328  142453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:56:07.804723  142453 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:56:07.804928  142453 main.go:141] libmachine: (functional-398727) Calling .DriverName
	I1004 00:56:07.805351  142453 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 00:56:07.805774  142453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 00:56:07.805813  142453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 00:56:07.820492  142453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37899
	I1004 00:56:07.820892  142453 main.go:141] libmachine: () Calling .GetVersion
	I1004 00:56:07.821394  142453 main.go:141] libmachine: Using API Version  1
	I1004 00:56:07.821434  142453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 00:56:07.821808  142453 main.go:141] libmachine: () Calling .GetMachineName
	I1004 00:56:07.822043  142453 main.go:141] libmachine: (functional-398727) Calling .DriverName
	I1004 00:56:07.855070  142453 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1004 00:56:07.856693  142453 start.go:298] selected driver: kvm2
	I1004 00:56:07.856715  142453 start.go:902] validating driver "kvm2" against &{Name:functional-398727 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-398727 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.25 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1004 00:56:07.856841  142453 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 00:56:07.859477  142453 out.go:177] 
	W1004 00:56:07.861525  142453 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1004 00:56:07.863051  142453 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)
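
The French output above is the same dry-run scenario as TestFunctional/parallel/DryRun: "Utilisation du pilote kvm2 basé sur le profil existant" corresponds to "Using the kvm2 driver based on existing profile", and the X line is the French rendering of the RSRC_INSUFFICIENT_REQ_MEMORY error. Reproducing it by hand presumably only requires a French locale in the environment; which variable minikube consults (LC_ALL vs LANG) is an assumption in this sketch.

# assumption: a French locale switches minikube's user-facing messages to French
LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-398727 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio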

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.86s)
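
For reference, the -f flag exercised above takes a Go template over the status structure; the fields used here are .Host, .Kubelet, .APIServer and .Kubeconfig (the literal label text around them, including the "kublet" spelling in the test's format string, is arbitrary). A minimal sketch of the three invocations:

out/minikube-linux-amd64 -p functional-398727 status
out/minikube-linux-amd64 -p functional-398727 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
out/minikube-linux-amd64 -p functional-398727 status -o json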

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (32.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-398727 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-398727 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-4h5jz" [df490e9d-bee0-4764-86fa-cc5479917fcc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-4h5jz" [df490e9d-bee0-4764-86fa-cc5479917fcc] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.053265664s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.25:31510
functional_test.go:1674: http://192.168.39.25:31510: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-4h5jz

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.25:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.25:31510
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (32.80s)
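
Condensed, the flow exercised above is: create a deployment, expose it as a NodePort service, then ask minikube for a reachable URL. The first three commands are taken from the log; the final curl line is an illustrative addition, and the printed URL (http://192.168.39.25:31510 in this run) will differ per run.

kubectl --context functional-398727 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-398727 expose deployment hello-node-connect --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-398727 service hello-node-connect --url
curl "$(out/minikube-linux-amd64 -p functional-398727 service hello-node-connect --url)"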

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ce89bcea-b514-4c79-8270-7266fb9438e0] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014114541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-398727 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-398727 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-398727 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-398727 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de965739-d1bc-49d4-b028-0bb9c8229e6e] Pending
helpers_test.go:344: "sp-pod" [de965739-d1bc-49d4-b028-0bb9c8229e6e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de965739-d1bc-49d4-b028-0bb9c8229e6e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 31.0138308s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-398727 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-398727 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-398727 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e42ba8c2-ac92-4d25-aad4-8db49ce96da4] Pending
helpers_test.go:344: "sp-pod" [e42ba8c2-ac92-4d25-aad4-8db49ce96da4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1004 00:56:15.436896  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [e42ba8c2-ac92-4d25-aad4-8db49ce96da4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.011515712s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-398727 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.78s)
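
The testdata/storage-provisioner/pvc.yaml applied above is not reproduced in this report. An illustrative equivalent for the claim named myclaim can be applied inline as below; the access mode and storage size are assumptions, and the default storage class returned by the earlier "get storageclass" step is relied on implicitly. The sp-pod steps from the log then follow unchanged.

kubectl --context functional-398727 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF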

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh -n functional-398727 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 cp functional-398727:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2080485210/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh -n functional-398727 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)
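
In short, minikube cp copies in both directions between the host and the node. The two transfers exercised above are shown below; the local destination path in the second command is illustrative (the test writes into a temporary directory).

# host -> node
out/minikube-linux-amd64 -p functional-398727 cp testdata/cp-test.txt /home/docker/cp-test.txt
# node -> host
out/minikube-linux-amd64 -p functional-398727 cp functional-398727:/home/docker/cp-test.txt ./cp-test.txt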

                                                
                                    
x
+
TestFunctional/parallel/MySQL (34.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-398727 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-z975k" [b31a495f-5017-4e0f-a088-d58c4a54022d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-z975k" [b31a495f-5017-4e0f-a088-d58c4a54022d] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.1047738s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;": exit status 1 (323.333698ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;": exit status 1 (156.276579ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
E1004 00:56:05.195144  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.200781  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.211629  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.231915  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.272296  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.352724  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.513715  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:05.833951  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:56:06.474600  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.21s)
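
The two non-zero exits above (ERROR 1045 access denied, then ERROR 2002 socket not available) are typical while the MySQL container is still initializing, which is why the test simply retries the same query until it succeeds. The command itself, verbatim from the log (the pod name will differ per run):

kubectl --context functional-398727 exec mysql-859648c796-z975k -- mysql -ppassword -e "show databases;"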

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/135565/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/test/nested/copy/135565/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.21s)
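
The path checked above, /etc/test/nested/copy/135565/hosts, is the in-VM copy of a file the test placed under the host-side sync directory; minikube copies files under $MINIKUBE_HOME/files/ into the node at the matching absolute path. Verifying it by hand mirrors the logged command:

out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/test/nested/copy/135565/hosts"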

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/135565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/135565.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/135565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /usr/share/ca-certificates/135565.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1355652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/1355652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1355652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /usr/share/ca-certificates/1355652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.50s)
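
The files checked above come in pairs: a PEM named after the test's PID (135565.pem, 1355652.pem), checked in both /etc/ssl/certs and /usr/share/ca-certificates, plus what appears to be its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0) in /etc/ssl/certs. Spot-checking them on a running profile is just a cat over ssh, paths verbatim from the log:

out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/135565.pem"
out/minikube-linux-amd64 -p functional-398727 ssh "sudo cat /etc/ssl/certs/51391683.0"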

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-398727 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
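
The go-template above prints just the label keys of the first node. An equivalent, arguably simpler, manual check uses kubectl's built-in flag:

# prints each node with its labels in key=value form
kubectl --context functional-398727 get nodes --show-labels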

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "sudo systemctl is-active docker": exit status 1 (230.41559ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "sudo systemctl is-active containerd": exit status 1 (228.19668ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
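
Exit status 1 with "inactive" on stdout is the expected result here: systemctl is-active exits non-zero for units that are not running, and on a crio profile neither docker nor containerd should be active. A complementary check, not part of the test, would confirm that the configured runtime itself is active:

# illustrative follow-up on this --container-runtime=crio profile
out/minikube-linux-amd64 -p functional-398727 ssh "sudo systemctl is-active crio"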

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-398727 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-398727
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-398727
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-398727 image ls --format short --alsologtostderr:
I1004 00:56:20.383674  143620 out.go:296] Setting OutFile to fd 1 ...
I1004 00:56:20.383823  143620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.383836  143620 out.go:309] Setting ErrFile to fd 2...
I1004 00:56:20.383843  143620 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.384120  143620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
I1004 00:56:20.384775  143620 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.384889  143620 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.385229  143620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.385279  143620 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.400387  143620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46631
I1004 00:56:20.400886  143620 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.401388  143620 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.401407  143620 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.401855  143620 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.402077  143620 main.go:141] libmachine: (functional-398727) Calling .GetState
I1004 00:56:20.404155  143620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.404217  143620 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.418929  143620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36465
I1004 00:56:20.419369  143620 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.419838  143620 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.419862  143620 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.420166  143620 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.420338  143620 main.go:141] libmachine: (functional-398727) Calling .DriverName
I1004 00:56:20.420509  143620 ssh_runner.go:195] Run: systemctl --version
I1004 00:56:20.420538  143620 main.go:141] libmachine: (functional-398727) Calling .GetSSHHostname
I1004 00:56:20.423515  143620 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.423944  143620 main.go:141] libmachine: (functional-398727) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:9f:df", ip: ""} in network mk-functional-398727: {Iface:virbr1 ExpiryTime:2023-10-04 01:52:56 +0000 UTC Type:0 Mac:52:54:00:6f:9f:df Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-398727 Clientid:01:52:54:00:6f:9f:df}
I1004 00:56:20.423968  143620 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined IP address 192.168.39.25 and MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.424159  143620 main.go:141] libmachine: (functional-398727) Calling .GetSSHPort
I1004 00:56:20.424345  143620 main.go:141] libmachine: (functional-398727) Calling .GetSSHKeyPath
I1004 00:56:20.424502  143620 main.go:141] libmachine: (functional-398727) Calling .GetSSHUsername
I1004 00:56:20.424660  143620 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/functional-398727/id_rsa Username:docker}
I1004 00:56:20.568952  143620 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 00:56:20.640450  143620 main.go:141] libmachine: Making call to close driver server
I1004 00:56:20.640474  143620 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:20.640803  143620 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:20.640822  143620 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:20.640837  143620 main.go:141] libmachine: Making call to close driver server
I1004 00:56:20.640846  143620 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:20.641096  143620 main.go:141] libmachine: (functional-398727) DBG | Closing plugin on server side
I1004 00:56:20.641128  143620 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:20.641137  143620 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
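
The same image inventory is rendered three ways in this report: --format short (names only, above), --format table and --format json (in the following sections). The invocations differ only in the format flag; the --alsologtostderr used by the test merely adds the libmachine trace shown in the Stderr blocks.

out/minikube-linux-amd64 -p functional-398727 image ls --format short
out/minikube-linux-amd64 -p functional-398727 image ls --format table
out/minikube-linux-amd64 -p functional-398727 image ls --format json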

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-398727 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/minikube-local-cache-test     | functional-398727  | 2c5ba0a37ddee | 3.35kB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/nginx                 | alpine             | d571254277f6a | 44.4MB |
| gcr.io/google-containers/addon-resizer  | functional-398727  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.2            | c120fed2beb84 | 74.7MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | 61395b4c586da | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 7a5d9d67a13f6 | 61.5MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | cdcab12b2dd16 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 55f13c92defb1 | 123MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-398727 image ls --format table --alsologtostderr:
I1004 00:56:21.206678  143742 out.go:296] Setting OutFile to fd 1 ...
I1004 00:56:21.206902  143742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:21.206911  143742 out.go:309] Setting ErrFile to fd 2...
I1004 00:56:21.206915  143742 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:21.207066  143742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
I1004 00:56:21.207625  143742 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:21.207718  143742 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:21.208095  143742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:21.208138  143742 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:21.222955  143742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32997
I1004 00:56:21.223480  143742 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:21.224175  143742 main.go:141] libmachine: Using API Version  1
I1004 00:56:21.224210  143742 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:21.224572  143742 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:21.224771  143742 main.go:141] libmachine: (functional-398727) Calling .GetState
I1004 00:56:21.226770  143742 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:21.226820  143742 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:21.241911  143742 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
I1004 00:56:21.242407  143742 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:21.242908  143742 main.go:141] libmachine: Using API Version  1
I1004 00:56:21.242943  143742 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:21.243281  143742 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:21.243561  143742 main.go:141] libmachine: (functional-398727) Calling .DriverName
I1004 00:56:21.243782  143742 ssh_runner.go:195] Run: systemctl --version
I1004 00:56:21.243809  143742 main.go:141] libmachine: (functional-398727) Calling .GetSSHHostname
I1004 00:56:21.246966  143742 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:21.247431  143742 main.go:141] libmachine: (functional-398727) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:9f:df", ip: ""} in network mk-functional-398727: {Iface:virbr1 ExpiryTime:2023-10-04 01:52:56 +0000 UTC Type:0 Mac:52:54:00:6f:9f:df Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-398727 Clientid:01:52:54:00:6f:9f:df}
I1004 00:56:21.247471  143742 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined IP address 192.168.39.25 and MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:21.247666  143742 main.go:141] libmachine: (functional-398727) Calling .GetSSHPort
I1004 00:56:21.247838  143742 main.go:141] libmachine: (functional-398727) Calling .GetSSHKeyPath
I1004 00:56:21.248009  143742 main.go:141] libmachine: (functional-398727) Calling .GetSSHUsername
I1004 00:56:21.248165  143742 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/functional-398727/id_rsa Username:docker}
I1004 00:56:21.353016  143742 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 00:56:21.459428  143742 main.go:141] libmachine: Making call to close driver server
I1004 00:56:21.459447  143742 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:21.459779  143742 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:21.459801  143742 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:21.459811  143742 main.go:141] libmachine: Making call to close driver server
I1004 00:56:21.459822  143742 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:21.460038  143742 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:21.460057  143742 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-398727 image ls --format json --alsologtostderr:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"127149008"},{"id":"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":["registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f
4e487924195f60c09f284bbda38cab7cbe71a51fded","registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"74687895"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"61485878"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd06
1d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-398727"],"size":"34114467"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820094"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef12
0ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2c5ba0a37ddee24508b0f093fabef7a8eab9965eee4b05c8f769576ea5cb6665","repoDigests":["localhost/minikube-local-cache-test@sha256:331af01c831fa3a20da80fda6fe9a020c93b52df1884b0692056c32437dc50f4"],"repoTags":["localhost/minikube-local-cache-test:functional-398727"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531d
b8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"},{"id":"d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4","repoDigests":["docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14","docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44434729"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4","registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"123171638"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s
.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb
0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-398727 image ls --format json --alsologtostderr:
I1004 00:56:20.955489  143696 out.go:296] Setting OutFile to fd 1 ...
I1004 00:56:20.955653  143696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.955666  143696 out.go:309] Setting ErrFile to fd 2...
I1004 00:56:20.955677  143696 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.955859  143696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
I1004 00:56:20.956432  143696 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.956530  143696 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.956874  143696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.956916  143696 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.971990  143696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35903
I1004 00:56:20.972523  143696 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.973235  143696 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.973268  143696 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.973641  143696 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.973822  143696 main.go:141] libmachine: (functional-398727) Calling .GetState
I1004 00:56:20.975712  143696 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.975761  143696 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.991370  143696 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41269
I1004 00:56:20.991809  143696 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.992338  143696 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.992372  143696 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.992730  143696 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.992946  143696 main.go:141] libmachine: (functional-398727) Calling .DriverName
I1004 00:56:20.993135  143696 ssh_runner.go:195] Run: systemctl --version
I1004 00:56:20.993170  143696 main.go:141] libmachine: (functional-398727) Calling .GetSSHHostname
I1004 00:56:20.996191  143696 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.996731  143696 main.go:141] libmachine: (functional-398727) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:9f:df", ip: ""} in network mk-functional-398727: {Iface:virbr1 ExpiryTime:2023-10-04 01:52:56 +0000 UTC Type:0 Mac:52:54:00:6f:9f:df Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-398727 Clientid:01:52:54:00:6f:9f:df}
I1004 00:56:20.996812  143696 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined IP address 192.168.39.25 and MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.997079  143696 main.go:141] libmachine: (functional-398727) Calling .GetSSHPort
I1004 00:56:20.997251  143696 main.go:141] libmachine: (functional-398727) Calling .GetSSHKeyPath
I1004 00:56:20.997399  143696 main.go:141] libmachine: (functional-398727) Calling .GetSSHUsername
I1004 00:56:20.997509  143696 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/functional-398727/id_rsa Username:docker}
I1004 00:56:21.100978  143696 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 00:56:21.158106  143696 main.go:141] libmachine: Making call to close driver server
I1004 00:56:21.158124  143696 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:21.158445  143696 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:21.158466  143696 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:21.158476  143696 main.go:141] libmachine: (functional-398727) DBG | Closing plugin on server side
I1004 00:56:21.158481  143696 main.go:141] libmachine: Making call to close driver server
I1004 00:56:21.158504  143696 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:21.158751  143696 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:21.158778  143696 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
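
Side note (not part of the test): the JSON printed by `image ls --format json` above is a flat array of image records with id, repoDigests, repoTags and size fields, so it can be post-processed on the host. A minimal sketch, assuming jq is installed and that --alsologtostderr is dropped so stdout carries only the JSON:

out/minikube-linux-amd64 -p functional-398727 image ls --format json | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'   # print first tag and size for each image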

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-398727 image ls --format yaml --alsologtostderr:
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-398727
size: "34114467"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2c5ba0a37ddee24508b0f093fabef7a8eab9965eee4b05c8f769576ea5cb6665
repoDigests:
- localhost/minikube-local-cache-test@sha256:331af01c831fa3a20da80fda6fe9a020c93b52df1884b0692056c32437dc50f4
repoTags:
- localhost/minikube-local-cache-test:functional-398727
size: "3345"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
- registry.k8s.io/kube-controller-manager@sha256:757a9c9d2f2329799490f9cec6c8ea12dfe4b6225051f6436f39d22a1def682e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "123171638"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: d571254277f6a0ba9d0c4a08f29b94476dcd4a95275bd484ece060ee4ff847e4
repoDigests:
- docker.io/library/nginx@sha256:34b58b4f5c6d133d97298cbaae140283dc325ff1aeffb28176f63078baeffd14
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
repoTags:
- docker.io/library/nginx:alpine
size: "44434729"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests:
- registry.k8s.io/kube-proxy@sha256:3330e491169be46febd3f4e487924195f60c09f284bbda38cab7cbe71a51fded
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "74687895"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:b2888fc9cfe7cd9d6727aeb462d13c7c45dec413b66f2819a36c4a3cb9d4df75
repoTags:
- docker.io/library/nginx:latest
size: "190820094"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07ec0f29e172784b9fda870d63430a84befade590a2220c1fcce52f17cd24631
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "127149008"
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:8fc5b9b97128515266d5435273682ba36115d9ca1b68a5749e6f9a23927ef543
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "61485878"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-398727 image ls --format yaml --alsologtostderr:
I1004 00:56:20.688063  143644 out.go:296] Setting OutFile to fd 1 ...
I1004 00:56:20.688183  143644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.688192  143644 out.go:309] Setting ErrFile to fd 2...
I1004 00:56:20.688196  143644 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:20.688376  143644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
I1004 00:56:20.688919  143644 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.689011  143644 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:20.689336  143644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.689379  143644 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.703839  143644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38895
I1004 00:56:20.704343  143644 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.704930  143644 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.704957  143644 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.705334  143644 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.705537  143644 main.go:141] libmachine: (functional-398727) Calling .GetState
I1004 00:56:20.707674  143644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:20.707726  143644 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:20.722321  143644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38701
I1004 00:56:20.722798  143644 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:20.723243  143644 main.go:141] libmachine: Using API Version  1
I1004 00:56:20.723268  143644 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:20.723682  143644 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:20.723879  143644 main.go:141] libmachine: (functional-398727) Calling .DriverName
I1004 00:56:20.724104  143644 ssh_runner.go:195] Run: systemctl --version
I1004 00:56:20.724143  143644 main.go:141] libmachine: (functional-398727) Calling .GetSSHHostname
I1004 00:56:20.726986  143644 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.727431  143644 main.go:141] libmachine: (functional-398727) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:9f:df", ip: ""} in network mk-functional-398727: {Iface:virbr1 ExpiryTime:2023-10-04 01:52:56 +0000 UTC Type:0 Mac:52:54:00:6f:9f:df Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-398727 Clientid:01:52:54:00:6f:9f:df}
I1004 00:56:20.727472  143644 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined IP address 192.168.39.25 and MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:20.727543  143644 main.go:141] libmachine: (functional-398727) Calling .GetSSHPort
I1004 00:56:20.727732  143644 main.go:141] libmachine: (functional-398727) Calling .GetSSHKeyPath
I1004 00:56:20.727891  143644 main.go:141] libmachine: (functional-398727) Calling .GetSSHUsername
I1004 00:56:20.728028  143644 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/functional-398727/id_rsa Username:docker}
I1004 00:56:20.845438  143644 ssh_runner.go:195] Run: sudo crictl images --output json
I1004 00:56:20.910471  143644 main.go:141] libmachine: Making call to close driver server
I1004 00:56:20.910492  143644 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:20.910818  143644 main.go:141] libmachine: (functional-398727) DBG | Closing plugin on server side
I1004 00:56:20.910865  143644 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:20.910875  143644 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:20.910894  143644 main.go:141] libmachine: Making call to close driver server
I1004 00:56:20.910909  143644 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:20.911170  143644 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:20.911190  143644 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh pgrep buildkitd: exit status 1 (220.555722ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image build -t localhost/my-image:functional-398727 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image build -t localhost/my-image:functional-398727 testdata/build --alsologtostderr: (3.132686599s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-398727 image build -t localhost/my-image:functional-398727 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2557c038421
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-398727
--> e2dbb218116
Successfully tagged localhost/my-image:functional-398727
e2dbb2181164e1748b4235e5d9ee791c12a361ea8f135f169a3a228d287c9e07
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-398727 image build -t localhost/my-image:functional-398727 testdata/build --alsologtostderr:
I1004 00:56:21.016614  143713 out.go:296] Setting OutFile to fd 1 ...
I1004 00:56:21.016809  143713 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:21.016821  143713 out.go:309] Setting ErrFile to fd 2...
I1004 00:56:21.016829  143713 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1004 00:56:21.017153  143713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
I1004 00:56:21.017974  143713 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:21.018650  143713 config.go:182] Loaded profile config "functional-398727": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1004 00:56:21.019217  143713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:21.019295  143713 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:21.034774  143713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44671
I1004 00:56:21.035282  143713 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:21.035846  143713 main.go:141] libmachine: Using API Version  1
I1004 00:56:21.035867  143713 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:21.036297  143713 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:21.036483  143713 main.go:141] libmachine: (functional-398727) Calling .GetState
I1004 00:56:21.038511  143713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1004 00:56:21.038562  143713 main.go:141] libmachine: Launching plugin server for driver kvm2
I1004 00:56:21.053064  143713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
I1004 00:56:21.053535  143713 main.go:141] libmachine: () Calling .GetVersion
I1004 00:56:21.054048  143713 main.go:141] libmachine: Using API Version  1
I1004 00:56:21.054082  143713 main.go:141] libmachine: () Calling .SetConfigRaw
I1004 00:56:21.054431  143713 main.go:141] libmachine: () Calling .GetMachineName
I1004 00:56:21.054617  143713 main.go:141] libmachine: (functional-398727) Calling .DriverName
I1004 00:56:21.054809  143713 ssh_runner.go:195] Run: systemctl --version
I1004 00:56:21.054852  143713 main.go:141] libmachine: (functional-398727) Calling .GetSSHHostname
I1004 00:56:21.057774  143713 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:21.058230  143713 main.go:141] libmachine: (functional-398727) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6f:9f:df", ip: ""} in network mk-functional-398727: {Iface:virbr1 ExpiryTime:2023-10-04 01:52:56 +0000 UTC Type:0 Mac:52:54:00:6f:9f:df Iaid: IPaddr:192.168.39.25 Prefix:24 Hostname:functional-398727 Clientid:01:52:54:00:6f:9f:df}
I1004 00:56:21.058270  143713 main.go:141] libmachine: (functional-398727) DBG | domain functional-398727 has defined IP address 192.168.39.25 and MAC address 52:54:00:6f:9f:df in network mk-functional-398727
I1004 00:56:21.058409  143713 main.go:141] libmachine: (functional-398727) Calling .GetSSHPort
I1004 00:56:21.058594  143713 main.go:141] libmachine: (functional-398727) Calling .GetSSHKeyPath
I1004 00:56:21.058768  143713 main.go:141] libmachine: (functional-398727) Calling .GetSSHUsername
I1004 00:56:21.058911  143713 sshutil.go:53] new ssh client: &{IP:192.168.39.25 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/functional-398727/id_rsa Username:docker}
I1004 00:56:21.168729  143713 build_images.go:151] Building image from path: /tmp/build.93282728.tar
I1004 00:56:21.168793  143713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1004 00:56:21.184730  143713 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.93282728.tar
I1004 00:56:21.191502  143713 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.93282728.tar: stat -c "%s %y" /var/lib/minikube/build/build.93282728.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.93282728.tar': No such file or directory
I1004 00:56:21.191540  143713 ssh_runner.go:362] scp /tmp/build.93282728.tar --> /var/lib/minikube/build/build.93282728.tar (3072 bytes)
I1004 00:56:21.224513  143713 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.93282728
I1004 00:56:21.251230  143713 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.93282728 -xf /var/lib/minikube/build/build.93282728.tar
I1004 00:56:21.269605  143713 crio.go:297] Building image: /var/lib/minikube/build/build.93282728
I1004 00:56:21.269695  143713 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-398727 /var/lib/minikube/build/build.93282728 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1004 00:56:24.074810  143713 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-398727 /var/lib/minikube/build/build.93282728 --cgroup-manager=cgroupfs: (2.80508755s)
I1004 00:56:24.074882  143713 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.93282728
I1004 00:56:24.086856  143713 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.93282728.tar
I1004 00:56:24.095838  143713 build_images.go:207] Built localhost/my-image:functional-398727 from /tmp/build.93282728.tar
I1004 00:56:24.095871  143713 build_images.go:123] succeeded building to: functional-398727
I1004 00:56:24.095877  143713 build_images.go:124] failed building to: 
I1004 00:56:24.095908  143713 main.go:141] libmachine: Making call to close driver server
I1004 00:56:24.095927  143713 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:24.096263  143713 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:24.096284  143713 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:24.096295  143713 main.go:141] libmachine: Making call to close driver server
I1004 00:56:24.096305  143713 main.go:141] libmachine: (functional-398727) Calling .Close
I1004 00:56:24.096552  143713 main.go:141] libmachine: Successfully made call to close driver server
I1004 00:56:24.096569  143713 main.go:141] libmachine: Making call to close connection to plugin binary
I1004 00:56:24.096572  143713 main.go:141] libmachine: (functional-398727) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
E1004 00:56:25.677177  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
2023/10/04 00:56:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
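
The STEP 1/3 .. 3/3 lines above imply a small build context. The sketch below is a hypothetical reconstruction of it (the real contents of testdata/build and content.txt are not recorded in this report), followed by the image build command the test ran:

mkdir -p build && cd build
printf 'hello\n' > content.txt            # placeholder payload; the real file contents are not shown in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-398727 image build -t localhost/my-image:functional-398727 . --alsologtostderr
out/minikube-linux-amd64 -p functional-398727 image ls    # localhost/my-image:functional-398727 should now be listed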

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.295721165s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-398727
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.32s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 141185: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-398727 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fa61c2a0-a713-406a-a0c8-7494842d2904] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fa61c2a0-a713-406a-a0c8-7494842d2904] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.041639745s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr: (3.95293402s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.81s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "211.18263ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "44.317466ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "221.201262ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "42.994315ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image load --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr: (2.370770644s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.59s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-398727 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.242.71 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
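
The tunnel subtests above (StartTunnel, WaitService/Setup, WaitService/IngressIP, AccessDirect, DeleteTunnel) exercise the LoadBalancer path end to end. A minimal manual sketch of the same flow, assuming the functional-398727 profile is running and a LoadBalancer service manifest equivalent to testdata/testsvc.yaml (contents not reproduced here); curl stands in for the HTTP check the test performs:

out/minikube-linux-amd64 -p functional-398727 tunnel --alsologtostderr &       # keep running; assigns ingress IPs to LoadBalancer services
kubectl --context functional-398727 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-398727 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sSf "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
kill %1                                                                         # stop the tunnel when done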

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image save gcr.io/google-containers/addon-resizer:functional-398727 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image save gcr.io/google-containers/addon-resizer:functional-398727 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (4.549466976s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (4.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image rm gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.391893427s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-398727
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 image save --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-398727 image save --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr: (6.010180541s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-398727
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.05s)
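
The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon tests above form a save/load round trip. Restated as standalone commands (the tarball path here is illustrative; the run used a path under the Jenkins workspace):

out/minikube-linux-amd64 -p functional-398727 image save gcr.io/google-containers/addon-resizer:functional-398727 ./addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-398727 image rm gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
out/minikube-linux-amd64 -p functional-398727 image load ./addon-resizer-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-398727 image save --daemon gcr.io/google-containers/addon-resizer:functional-398727 --alsologtostderr
docker image inspect gcr.io/google-containers/addon-resizer:functional-398727   # verify the image reached the local docker daemon
out/minikube-linux-amd64 -p functional-398727 image ls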

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-398727 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-398727 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-mn9hz" [8f19a19d-eed7-4c2b-982f-282d93ba3e54] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-mn9hz" [8f19a19d-eed7-4c2b-982f-282d93ba3e54] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.025502639s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.32s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdany-port637732815/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696380967869889980" to /tmp/TestFunctionalparallelMountCmdany-port637732815/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696380967869889980" to /tmp/TestFunctionalparallelMountCmdany-port637732815/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696380967869889980" to /tmp/TestFunctionalparallelMountCmdany-port637732815/001/test-1696380967869889980
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.477002ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  4 00:56 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  4 00:56 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  4 00:56 test-1696380967869889980
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh cat /mount-9p/test-1696380967869889980
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-398727 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0b46ddcc-776e-4181-af6b-028fa0705e90] Pending
E1004 00:56:10.316219  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [0b46ddcc-776e-4181-af6b-028fa0705e90] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0b46ddcc-776e-4181-af6b-028fa0705e90] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0b46ddcc-776e-4181-af6b-028fa0705e90] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.02307081s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-398727 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdany-port637732815/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.84s)
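
The 9p mount flow exercised above can be reproduced by hand; the host directory below is arbitrary, and the mount command must stay running (the test backgrounds it as a daemon):

out/minikube-linux-amd64 mount -p functional-398727 /tmp/shared:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p"   # confirm the 9p mount is present in the guest
out/minikube-linux-amd64 -p functional-398727 ssh -- ls -la /mount-9p
out/minikube-linux-amd64 -p functional-398727 ssh "sudo umount -f /mount-9p"         # clean up inside the guest
kill %1                                                                               # stop the mount process on the host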

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdspecific-port182797788/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (189.899588ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdspecific-port182797788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "sudo umount -f /mount-9p": exit status 1 (191.065511ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-398727 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdspecific-port182797788/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service list -o json
functional_test.go:1493: Took "836.236487ms" to run "out/minikube-linux-amd64 -p functional-398727 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T" /mount1: exit status 1 (224.44997ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-398727 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-398727 /tmp/TestFunctionalparallelMountCmdVerifyCleanup466853554/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.25:31839
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-398727 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.25:31839
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)
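
The ServiceCmd group above boils down to deploying an echo server, exposing it as a NodePort, and asking minikube for its URL. The equivalent commands, taken from this run:

kubectl --context functional-398727 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-398727 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-amd64 -p functional-398727 service list
out/minikube-linux-amd64 -p functional-398727 service hello-node --url              # printed http://192.168.39.25:31839 in this run
out/minikube-linux-amd64 -p functional-398727 service --namespace=default --https --url hello-node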

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-398727
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-398727
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-398727
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (77.66s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-533597 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1004 00:56:46.158284  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 00:57:27.119683  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-533597 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m17.662943991s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (77.66s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons enable ingress --alsologtostderr -v=5: (17.595921006s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.60s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533597 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.62s)

                                                
                                    
TestJSONOutput/start/Command (70.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-444777 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1004 01:01:32.881989  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:01:55.214035  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-444777 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m10.461696802s)
--- PASS: TestJSONOutput/start/Command (70.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-444777 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-444777 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-444777 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-444777 --output=json --user=testUser: (7.08510381s)
--- PASS: TestJSONOutput/stop/Command (7.09s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-746971 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-746971 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.129953ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dbe729a1-9a81-4d48-a3b8-440b4336514f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-746971] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab6db95-ff44-4186-ac9e-7d4bbe63686f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17348"}}
	{"specversion":"1.0","id":"6f4a1f24-1415-4306-8b68-03d94983ce1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2944baad-9643-4745-a94b-b54669631239","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig"}}
	{"specversion":"1.0","id":"dab5c368-a056-4c77-b5e6-b3335efe141e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube"}}
	{"specversion":"1.0","id":"9cdef689-5a12-485a-ab24-78513a0630d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e9865607-d762-49cf-a440-9a5fa6600012","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6256e5fe-5df6-4999-9802-58c348c715dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-746971" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-746971
--- PASS: TestErrorJSONOutput (0.18s)
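
Every line of the JSON output above is a CloudEvent whose `type` suffix (`step`, `info`, `error`) and `data` payload drive both the TestJSONOutput assertions (distinct, increasing `currentstep` values) and the failure seen here (exit code 56, `DRV_UNSUPPORTED_OS`). A minimal consumer sketch follows; the struct is limited to fields actually visible in this report and is not minikube's own schema definition.

	// events.go: read `minikube start --output=json` line by line and report
	// step progress, out-of-order steps, and error events.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	type event struct {
		Type string `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
		Data struct {
			CurrentStep string `json:"currentstep"`
			TotalSteps  string `json:"totalsteps"`
			Message     string `json:"message"`
			ExitCode    string `json:"exitcode"`
			Name        string `json:"name"`
		} `json:"data"`
	}

	func main() {
		lastStep := -1
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | go run events.go
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // skip anything that is not a JSON event
			}
			switch e.Type {
			case "io.k8s.sigs.minikube.step":
				n, _ := strconv.Atoi(e.Data.CurrentStep)
				if n <= lastStep {
					fmt.Printf("step %d repeats or goes backwards (last was %d)\n", n, lastStep)
				}
				lastStep = n
				fmt.Printf("[%s/%s] %s\n", e.Data.CurrentStep, e.Data.TotalSteps, e.Data.Message)
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s: %s (exit code %s)\n", e.Data.Name, e.Data.Message, e.Data.ExitCode)
			}
		}
	}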

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (96.9s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-156048 --driver=kvm2  --container-runtime=crio
E1004 01:03:15.375491  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.380907  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.391187  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.411445  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.451752  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.532141  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:15.692597  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:16.013191  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:16.654141  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:17.134436  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:03:17.935069  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-156048 --driver=kvm2  --container-runtime=crio: (45.344710268s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-159730 --driver=kvm2  --container-runtime=crio
E1004 01:03:20.495822  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:25.617037  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:35.858242  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:03:56.339126  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-159730 --driver=kvm2  --container-runtime=crio: (48.793839914s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-156048
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-159730
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-159730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-159730
helpers_test.go:175: Cleaning up "first-156048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-156048
--- PASS: TestMinikubeProfile (96.90s)
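
`minikube profile list -ojson` returns machine-readable profile data, which the test inspects after each `profile` switch. A decoding sketch is below; note that the `valid`/`invalid`/`Name` field names are assumptions based on typical output and do not appear anywhere in this report, so confirm them locally before depending on them.

	// profiles.go: list profiles from `minikube profile list -o json`.
	// Field names below are assumed, not taken from this report.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type profileList struct {
		Valid []struct {
			Name string `json:"Name"`
		} `json:"valid"`
		Invalid []struct {
			Name string `json:"Name"`
		} `json:"invalid"`
	}

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
		if err != nil {
			log.Fatalf("profile list: %v", err)
		}
		var p profileList
		if err := json.Unmarshal(out, &p); err != nil {
			log.Fatalf("decode: %v", err)
		}
		for _, v := range p.Valid {
			fmt.Println("valid profile:", v.Name)
		}
		for _, v := range p.Invalid {
			fmt.Println("invalid profile:", v.Name)
		}
	}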

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-528023 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-528023 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.056435061s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.06s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-528023 ssh -- ls /minikube-host
E1004 01:04:37.299958  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-528023 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
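
The verification above is just two guest-side commands: list the mounted host directory and confirm a 9p entry in the guest's mount table. A minimal sketch of the same check, reusing the profile name from this run:

	// verify9p.go: confirm the host mount is visible and backed by 9p.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		profile := "mount-start-1-528023" // profile name from this run

		if out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
			log.Fatalf("ls /minikube-host failed: %v\n%s", err, out)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "--", "mount").Output()
		if err != nil {
			log.Fatalf("mount: %v", err)
		}
		if !strings.Contains(string(out), "9p") {
			log.Fatal("no 9p entry in the guest mount table")
		}
		fmt.Println("9p mount present")
	}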

                                                
                                    
TestMountStart/serial/StartWithMountSecond (32.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-548527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-548527 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.216964673s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.22s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-528023 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-548527
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-548527: (1.162713638s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-548527
E1004 01:05:33.291122  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-548527: (22.892679896s)
--- PASS: TestMountStart/serial/RestartStopped (23.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-548527 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (112.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-038823 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1004 01:05:59.221733  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:06:00.975624  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:06:05.194983  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-038823 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.574837014s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.99s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-038823 -- rollout status deployment/busybox: (3.280690248s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-8g74z -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-038823 -- exec busybox-5bc68d56bd-ckxb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.05s)
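
The DeployApp2Nodes flow above applies a two-replica busybox deployment, waits for the rollout, then execs `nslookup` in each pod to confirm cluster DNS works from both nodes. A rough equivalent, assuming kubectl already points at this cluster's context and that the test pods keep the `busybox-` name prefix seen above:

	// dnscheck.go: run nslookup for a few well-known names in every busybox pod.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "pods",
			"-o", "jsonpath={.items[*].metadata.name}").Output()
		if err != nil {
			fmt.Println("get pods:", err)
			return
		}
		hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
		for _, pod := range strings.Fields(string(out)) {
			if !strings.HasPrefix(pod, "busybox-") {
				continue
			}
			for _, host := range hosts {
				if err := exec.Command("kubectl", "exec", pod, "--", "nslookup", host).Run(); err != nil {
					fmt.Printf("%s: nslookup %s failed: %v\n", pod, host, err)
				} else {
					fmt.Printf("%s: nslookup %s ok\n", pod, host)
				}
			}
		}
	}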

                                                
                                    
TestMultiNode/serial/AddNode (42.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-038823 -v 3 --alsologtostderr
E1004 01:08:15.375635  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-038823 -v 3 --alsologtostderr: (42.352407862s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.93s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.20s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp testdata/cp-test.txt multinode-038823:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823:/home/docker/cp-test.txt multinode-038823-m02:/home/docker/cp-test_multinode-038823_multinode-038823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test_multinode-038823_multinode-038823-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823:/home/docker/cp-test.txt multinode-038823-m03:/home/docker/cp-test_multinode-038823_multinode-038823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test_multinode-038823_multinode-038823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp testdata/cp-test.txt multinode-038823-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt multinode-038823:/home/docker/cp-test_multinode-038823-m02_multinode-038823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test_multinode-038823-m02_multinode-038823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m02:/home/docker/cp-test.txt multinode-038823-m03:/home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test_multinode-038823-m02_multinode-038823-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp testdata/cp-test.txt multinode-038823-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile53245555/001/cp-test_multinode-038823-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt multinode-038823:/home/docker/cp-test_multinode-038823-m03_multinode-038823.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823 "sudo cat /home/docker/cp-test_multinode-038823-m03_multinode-038823.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 cp multinode-038823-m03:/home/docker/cp-test.txt multinode-038823-m02:/home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 ssh -n multinode-038823-m02 "sudo cat /home/docker/cp-test_multinode-038823-m03_multinode-038823-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.04s)
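
All of the CopyFile permutations follow one pattern: `minikube cp` a file onto a node, `minikube ssh -n <node>` to cat it back, and compare. A condensed sketch of a single round trip, using names from this run (the test itself repeats this for every source/destination pair):

	// cproundtrip.go: copy a file to a node and verify its contents over ssh.
	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		profile, node := "multinode-038823", "multinode-038823-m02" // names from this run
		local, remote := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

		want, err := os.ReadFile(local)
		if err != nil {
			log.Fatal(err)
		}
		if out, err := exec.Command("minikube", "-p", profile, "cp", local, node+":"+remote).CombinedOutput(); err != nil {
			log.Fatalf("cp: %v\n%s", err, out)
		}
		got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+remote).Output()
		if err != nil {
			log.Fatalf("ssh cat: %v", err)
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			log.Fatal("contents differ after round trip")
		}
		log.Println("round trip ok")
	}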

                                                
                                    
TestMultiNode/serial/StopNode (2.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-038823 node stop m03: (2.075766034s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-038823 status: exit status 7 (436.035871ms)

                                                
                                                
-- stdout --
	multinode-038823
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-038823-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-038823-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr: exit status 7 (434.243602ms)

                                                
                                                
-- stdout --
	multinode-038823
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-038823-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-038823-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:08:31.380567  150653 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:08:31.380811  150653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:08:31.380820  150653 out.go:309] Setting ErrFile to fd 2...
	I1004 01:08:31.380825  150653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:08:31.380976  150653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:08:31.381149  150653 out.go:303] Setting JSON to false
	I1004 01:08:31.381182  150653 mustload.go:65] Loading cluster: multinode-038823
	I1004 01:08:31.381299  150653 notify.go:220] Checking for updates...
	I1004 01:08:31.381566  150653 config.go:182] Loaded profile config "multinode-038823": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:08:31.381581  150653 status.go:255] checking status of multinode-038823 ...
	I1004 01:08:31.381959  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.382027  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.398288  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37167
	I1004 01:08:31.398808  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.399370  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.399389  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.399790  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.400002  150653 main.go:141] libmachine: (multinode-038823) Calling .GetState
	I1004 01:08:31.401588  150653 status.go:330] multinode-038823 host status = "Running" (err=<nil>)
	I1004 01:08:31.401602  150653 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:08:31.401889  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.401931  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.416676  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I1004 01:08:31.417201  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.417731  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.417761  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.418125  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.418310  150653 main.go:141] libmachine: (multinode-038823) Calling .GetIP
	I1004 01:08:31.420911  150653 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:08:31.421257  150653 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:08:31.421343  150653 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:08:31.421418  150653 host.go:66] Checking if "multinode-038823" exists ...
	I1004 01:08:31.421735  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.421781  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.437035  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46131
	I1004 01:08:31.437420  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.437983  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.438007  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.438320  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.438544  150653 main.go:141] libmachine: (multinode-038823) Calling .DriverName
	I1004 01:08:31.438725  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 01:08:31.438769  150653 main.go:141] libmachine: (multinode-038823) Calling .GetSSHHostname
	I1004 01:08:31.441527  150653 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:08:31.441986  150653 main.go:141] libmachine: (multinode-038823) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:cd:1c", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:05:53 +0000 UTC Type:0 Mac:52:54:00:89:cd:1c Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-038823 Clientid:01:52:54:00:89:cd:1c}
	I1004 01:08:31.442015  150653 main.go:141] libmachine: (multinode-038823) DBG | domain multinode-038823 has defined IP address 192.168.39.212 and MAC address 52:54:00:89:cd:1c in network mk-multinode-038823
	I1004 01:08:31.442156  150653 main.go:141] libmachine: (multinode-038823) Calling .GetSSHPort
	I1004 01:08:31.442325  150653 main.go:141] libmachine: (multinode-038823) Calling .GetSSHKeyPath
	I1004 01:08:31.442464  150653 main.go:141] libmachine: (multinode-038823) Calling .GetSSHUsername
	I1004 01:08:31.442586  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823/id_rsa Username:docker}
	I1004 01:08:31.541451  150653 ssh_runner.go:195] Run: systemctl --version
	I1004 01:08:31.547412  150653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:08:31.561505  150653 kubeconfig.go:92] found "multinode-038823" server: "https://192.168.39.212:8443"
	I1004 01:08:31.561543  150653 api_server.go:166] Checking apiserver status ...
	I1004 01:08:31.561582  150653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1004 01:08:31.573093  150653 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1097/cgroup
	I1004 01:08:31.586410  150653 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/podf34f143a5b95a664a6f0b6f04bfc8d7d/crio-06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1"
	I1004 01:08:31.586492  150653 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf34f143a5b95a664a6f0b6f04bfc8d7d/crio-06000bf34eaaec207676cdc33e5ab68fc02cb8a161be28010dc9ea95e45451b1/freezer.state
	I1004 01:08:31.596071  150653 api_server.go:204] freezer state: "THAWED"
	I1004 01:08:31.596103  150653 api_server.go:253] Checking apiserver healthz at https://192.168.39.212:8443/healthz ...
	I1004 01:08:31.601086  150653 api_server.go:279] https://192.168.39.212:8443/healthz returned 200:
	ok
	I1004 01:08:31.601113  150653 status.go:421] multinode-038823 apiserver status = Running (err=<nil>)
	I1004 01:08:31.601125  150653 status.go:257] multinode-038823 status: &{Name:multinode-038823 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1004 01:08:31.601150  150653 status.go:255] checking status of multinode-038823-m02 ...
	I1004 01:08:31.601597  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.601635  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.616636  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40785
	I1004 01:08:31.617052  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.617585  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.617607  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.617951  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.618155  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetState
	I1004 01:08:31.619745  150653 status.go:330] multinode-038823-m02 host status = "Running" (err=<nil>)
	I1004 01:08:31.619763  150653 host.go:66] Checking if "multinode-038823-m02" exists ...
	I1004 01:08:31.620031  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.620054  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.636679  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I1004 01:08:31.637068  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.637525  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.637577  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.637996  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.638183  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetIP
	I1004 01:08:31.640716  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:08:31.641212  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:08:31.641250  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:08:31.641442  150653 host.go:66] Checking if "multinode-038823-m02" exists ...
	I1004 01:08:31.641743  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.641778  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.656684  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42777
	I1004 01:08:31.657108  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.657559  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.657589  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.657972  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.658172  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .DriverName
	I1004 01:08:31.658391  150653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1004 01:08:31.658421  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHHostname
	I1004 01:08:31.661169  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:08:31.661736  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:fe:89", ip: ""} in network mk-multinode-038823: {Iface:virbr1 ExpiryTime:2023-10-04 02:07:02 +0000 UTC Type:0 Mac:52:54:00:57:fe:89 Iaid: IPaddr:192.168.39.181 Prefix:24 Hostname:multinode-038823-m02 Clientid:01:52:54:00:57:fe:89}
	I1004 01:08:31.661774  150653 main.go:141] libmachine: (multinode-038823-m02) DBG | domain multinode-038823-m02 has defined IP address 192.168.39.181 and MAC address 52:54:00:57:fe:89 in network mk-multinode-038823
	I1004 01:08:31.661942  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHPort
	I1004 01:08:31.662086  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHKeyPath
	I1004 01:08:31.662218  150653 main.go:141] libmachine: (multinode-038823-m02) Calling .GetSSHUsername
	I1004 01:08:31.662319  150653 sshutil.go:53] new ssh client: &{IP:192.168.39.181 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17348-128338/.minikube/machines/multinode-038823-m02/id_rsa Username:docker}
	I1004 01:08:31.744902  150653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1004 01:08:31.757890  150653 status.go:257] multinode-038823-m02 status: &{Name:multinode-038823-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1004 01:08:31.757956  150653 status.go:255] checking status of multinode-038823-m03 ...
	I1004 01:08:31.758430  150653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1004 01:08:31.758474  150653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1004 01:08:31.773345  150653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34069
	I1004 01:08:31.773888  150653 main.go:141] libmachine: () Calling .GetVersion
	I1004 01:08:31.774429  150653 main.go:141] libmachine: Using API Version  1
	I1004 01:08:31.774465  150653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1004 01:08:31.774819  150653 main.go:141] libmachine: () Calling .GetMachineName
	I1004 01:08:31.774966  150653 main.go:141] libmachine: (multinode-038823-m03) Calling .GetState
	I1004 01:08:31.776427  150653 status.go:330] multinode-038823-m03 host status = "Stopped" (err=<nil>)
	I1004 01:08:31.776444  150653 status.go:343] host is not running, skipping remaining checks
	I1004 01:08:31.776452  150653 status.go:257] multinode-038823-m03 status: &{Name:multinode-038823-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.95s)
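
The verbose status output shows how control-plane health is decided: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz on the advertised address and expect a 200 with "ok"; the command as a whole exits 7 because one node is stopped. A minimal sketch of just the healthz probe (TLS verification is skipped purely to keep the example short; the real check presumably authenticates against the cluster's certificates):

	// healthz.go: probe the apiserver health endpoint seen in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.212:8443/healthz") // node IP from this run
		if err != nil {
			log.Fatalf("healthz: %v", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}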

                                                
                                    
TestMultiNode/serial/StartAfterStop (31.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 node start m03 --alsologtostderr
E1004 01:08:43.064478  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-038823 node start m03 --alsologtostderr: (30.555246845s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.20s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-038823 node delete m03: (1.225526089s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr
E1004 01:20:33.290992  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
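
The final assertion above uses a go-template to print each node's Ready condition. The same template can be driven from a small program to count Ready nodes after the delete, as sketched here:

	// readynodes.go: count nodes whose Ready condition reports "True".
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			log.Fatalf("get nodes: %v", err)
		}
		ready := 0
		for _, s := range strings.Fields(string(out)) {
			if s == "True" {
				ready++
			}
		}
		fmt.Printf("%d node(s) Ready\n", ready)
	}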

                                                
                                    
TestMultiNode/serial/RestartMultiNode (443.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-038823 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1004 01:23:15.375637  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:25:33.292120  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:26:05.194814  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:28:15.375066  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:29:08.243047  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-038823 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m22.768420213s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-038823 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (443.31s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (49.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-038823
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-038823-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-038823-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (56.194599ms)

                                                
                                                
-- stdout --
	* [multinode-038823-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-038823-m02' is duplicated with machine name 'multinode-038823-m02' in profile 'multinode-038823'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-038823-m03 --driver=kvm2  --container-runtime=crio
E1004 01:30:33.291316  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:31:05.194672  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-038823-m03 --driver=kvm2  --container-runtime=crio: (48.633742692s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-038823
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-038823: exit status 80 (220.81448ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-038823
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-038823-m03 already exists in multinode-038823-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-038823-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.92s)

                                                
                                    
TestScheduledStopUnix (120.59s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-934910 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-934910 --memory=2048 --driver=kvm2  --container-runtime=crio: (49.013646316s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934910 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-934910 -n scheduled-stop-934910
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934910 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934910 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934910 -n scheduled-stop-934910
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934910
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-934910 --schedule 15s
E1004 01:35:33.291187  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1004 01:36:05.194277  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-934910
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-934910: exit status 7 (72.243439ms)

                                                
                                                
-- stdout --
	scheduled-stop-934910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934910 -n scheduled-stop-934910
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-934910 -n scheduled-stop-934910: exit status 7 (60.192719ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-934910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-934910
--- PASS: TestScheduledStopUnix (120.59s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.32s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.32s)

                                                
                                    
TestPause/serial/Start (81.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-720999 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-720999 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m21.514147008s)
--- PASS: TestPause/serial/Start (81.51s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (58.247607ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-294276] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (78.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294276 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294276 --driver=kvm2  --container-runtime=crio: (1m18.562330162s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294276 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.82s)

                                                
                                    
TestNetworkPlugins/group/false (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-171116 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-171116 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (101.033722ms)

                                                
                                                
-- stdout --
	* [false-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17348
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1004 01:37:50.756035  160550 out.go:296] Setting OutFile to fd 1 ...
	I1004 01:37:50.756297  160550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:37:50.756308  160550 out.go:309] Setting ErrFile to fd 2...
	I1004 01:37:50.756315  160550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1004 01:37:50.756518  160550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17348-128338/.minikube/bin
	I1004 01:37:50.757093  160550 out.go:303] Setting JSON to false
	I1004 01:37:50.758073  160550 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":8422,"bootTime":1696375049,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1004 01:37:50.758140  160550 start.go:138] virtualization: kvm guest
	I1004 01:37:50.760256  160550 out.go:177] * [false-171116] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1004 01:37:50.762090  160550 out.go:177]   - MINIKUBE_LOCATION=17348
	I1004 01:37:50.763385  160550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1004 01:37:50.762124  160550 notify.go:220] Checking for updates...
	I1004 01:37:50.765995  160550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17348-128338/kubeconfig
	I1004 01:37:50.767442  160550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17348-128338/.minikube
	I1004 01:37:50.768837  160550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1004 01:37:50.770076  160550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1004 01:37:50.771712  160550 config.go:182] Loaded profile config "NoKubernetes-294276": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:37:50.771806  160550 config.go:182] Loaded profile config "force-systemd-env-874915": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:37:50.771885  160550 config.go:182] Loaded profile config "pause-720999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1004 01:37:50.771956  160550 driver.go:373] Setting default libvirt URI to qemu:///system
	I1004 01:37:50.809474  160550 out.go:177] * Using the kvm2 driver based on user configuration
	I1004 01:37:50.810893  160550 start.go:298] selected driver: kvm2
	I1004 01:37:50.810909  160550 start.go:902] validating driver "kvm2" against <nil>
	I1004 01:37:50.810922  160550 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1004 01:37:50.812919  160550 out.go:177] 
	W1004 01:37:50.814471  160550 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1004 01:37:50.815883  160550 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-171116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-171116" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-171116

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-171116"

                                                
                                                
----------------------- debugLogs end: false-171116 [took: 2.67674559s] --------------------------------
helpers_test.go:175: Cleaning up "false-171116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-171116
--- PASS: TestNetworkPlugins/group/false (2.92s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (46.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --driver=kvm2  --container-runtime=crio: (45.581778712s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-294276 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-294276 status -o json: exit status 2 (285.623892ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-294276","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-294276
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-294276: (1.09523012s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (46.96s)

                                                
                                    
TestNoKubernetes/serial/Start (26.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294276 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.867436387s)
--- PASS: TestNoKubernetes/serial/Start (26.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (162.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m42.531717302s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (123.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-273516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-273516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (2m3.796383293s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (123.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294276 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294276 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.948624ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.72s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-294276
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-294276: (1.240577531s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (69.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-294276 --driver=kvm2  --container-runtime=crio
E1004 01:40:33.290670  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:41:05.194718  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-294276 --driver=kvm2  --container-runtime=crio: (1m9.504072089s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (69.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-294276 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-294276 "sudo systemctl is-active --quiet service kubelet": exit status 1 (201.226272ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (65.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-509298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-509298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m5.246471192s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (65.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-273516 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [16cc2d74-3565-4360-9899-bd029b8d2c9d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [16cc2d74-3565-4360-9899-bd029b8d2c9d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.037251513s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-273516 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-273516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-273516 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.1207229s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-273516 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-509298 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bf74dc00-3baa-45c0-93b9-11551bd87073] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bf74dc00-3baa-45c0-93b9-11551bd87073] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.037198708s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-509298 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-107182 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a0d4b160-90a1-4ecb-8351-4ab2bcc1193f] Pending
helpers_test.go:344: "busybox" [a0d4b160-90a1-4ecb-8351-4ab2bcc1193f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a0d4b160-90a1-4ecb-8351-4ab2bcc1193f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.037794935s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-107182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-509298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-509298 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032965578s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-509298 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-107182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-107182 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (60.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-487861 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1004 01:43:15.375294  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-487861 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (1m0.466578613s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-487861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-487861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.531656155s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-487861 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-487861 --alsologtostderr -v=3: (11.103573557s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-487861 -n newest-cni-487861
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-487861 -n newest-cni-487861: exit status 7 (65.836739ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-487861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (49.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-487861 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-487861 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (49.28393843s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-487861 -n newest-cni-487861
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (676.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-273516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-273516 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (11m16.647240327s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-273516 -n no-preload-273516
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (676.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-487861 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)
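Note: this check lists the images CRI-O knows about inside the node and reports anything outside the expected minikube set (here kindest/kindnetd). A hand-run equivalent; the jq filter is added purely for readability and is not part of the test:

    # list image tags as seen by the node's container runtime
    out/minikube-linux-amd64 ssh -p newest-cni-487861 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'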

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-487861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-487861 -n newest-cni-487861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-487861 -n newest-cni-487861: exit status 2 (254.312057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-487861 -n newest-cni-487861
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-487861 -n newest-cni-487861: exit status 2 (238.31956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-487861 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-487861 -n newest-cni-487861
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-487861 -n newest-cni-487861
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.50s)
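Note: the Pause subtest drives a pause/unpause cycle and reads individual status fields with Go templates; exit status 2 while a component is paused or stopped is expected, as the "(may be ok)" lines indicate. Reproduced by hand it is roughly:

    out/minikube-linux-amd64 pause -p newest-cni-487861
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-487861   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-487861     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-487861
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-487861   # back to "Running"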

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (326.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-239802 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-239802 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (5m26.23552145s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (326.24s)
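Note: the only substantive difference from a default profile here is --apiserver-port=8444, which moves the Kubernetes API off the usual 8443. One quick way to confirm the non-default port after such a start (kubectl cluster-info is a standard command, not something this test runs):

    kubectl --context default-k8s-diff-port-239802 cluster-info   # API server URL should end in :8444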

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (625.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-509298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-509298 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (10m25.11562559s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-509298 -n embed-certs-509298
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (625.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (750.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-107182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E1004 01:45:33.291224  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
E1004 01:45:48.243856  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:46:05.194923  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
E1004 01:48:15.375389  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
E1004 01:50:16.338563  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-107182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (12m30.289526726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-107182 -n old-k8s-version-107182
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (750.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af60dd41-4639-4ca1-a42a-1cc8276b93d8] Pending
helpers_test.go:344: "busybox" [af60dd41-4639-4ca1-a42a-1cc8276b93d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af60dd41-4639-4ca1-a42a-1cc8276b93d8] Running
E1004 01:50:33.291256  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.02530539s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.59s)
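Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, then reads the open-file limit inside the container. A hand-run sketch; the kubectl wait line approximates the test's own polling rather than what it literally executes:

    kubectl --context default-k8s-diff-port-239802 create -f testdata/busybox.yaml
    kubectl --context default-k8s-diff-port-239802 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context default-k8s-diff-port-239802 exec busybox -- /bin/sh -c "ulimit -n"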

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-239802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-239802 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.899327627s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-239802 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.03s)
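Note: the addon is enabled with deliberately overridden image and registry values (registry.k8s.io/echoserver:1.4 from fake.domain), which suggests the test only needs the metrics-server Deployment object to appear, not a working metrics-server image. The same enable-and-inspect sequence by hand:

    out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-239802 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context default-k8s-diff-port-239802 describe deploy/metrics-server -n kube-system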

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (624.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-239802 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2
E1004 01:53:15.375132  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/ingress-addon-legacy-533597/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-239802 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.2: (10m24.142877739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-239802 -n default-k8s-diff-port-239802
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (624.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (77.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m17.474311s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1004 02:10:33.291531  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/functional-398727/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m27.939262246s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
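Note: KubeletFlags simply greps the running kubelet process inside the node so the test can inspect its command-line flags; the same check by hand is a one-liner:

    out/minikube-linux-amd64 ssh -p auto-171116 "pgrep -a kubelet"   # prints the kubelet PID and full command line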

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qklgn" [333005c8-cef1-40aa-be2b-035394a327f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1004 02:11:05.194299  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/addons-718830/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qklgn" [333005c8-cef1-40aa-be2b-035394a327f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.01465189s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
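Note: the DNS, Localhost and HairPin subtests above all run inside the netcat deployment: resolve the in-cluster DNS name, connect back to the pod over localhost, and connect to the pod through its own Service name (the hairpin case). By hand:

    kubectl --context auto-171116 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: pod -> its own service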

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (97.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m37.493539652s)
--- PASS: TestNetworkPlugins/group/calico/Start (97.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qv5kz" [5aa3abd5-6c7d-459c-9b03-af0d957bf744] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025773415s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
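Note: ControllerPod only runs for CNIs that ship a node agent; it waits for that agent's pod (here the kindnet pod in kube-system, matched by the app=kindnet label) to report Ready. Roughly, by hand (the wait form approximates the test's polling):

    kubectl --context kindnet-171116 get pods -n kube-system -l app=kindnet
    kubectl --context kindnet-171116 wait --for=condition=ready pod -l app=kindnet -n kube-system --timeout=10m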

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.79s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v8949" [0624b1cb-0c18-432a-9ec7-3f3ced8b463a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v8949" [0624b1cb-0c18-432a-9ec7-3f3ced8b463a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.016020697s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (101.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m41.298815877s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.30s)
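Note: unlike the built-in --cni=flannel profile later in this run, custom-flannel passes a manifest path to --cni, so minikube applies the supplied kube-flannel.yaml instead of a bundled CNI. Trimmed to the interesting flags:

    out/minikube-linux-amd64 start -p custom-flannel-171116 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio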

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (88.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1004 02:12:04.289707  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/no-preload-273516/client.crt: no such file or directory
E1004 02:12:09.410860  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/no-preload-273516/client.crt: no such file or directory
E1004 02:12:19.651666  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/no-preload-273516/client.crt: no such file or directory
E1004 02:12:24.941964  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:24.947256  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:24.957577  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:24.977962  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:25.018600  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:25.098927  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:25.259279  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:25.580229  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:26.220905  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:27.502140  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:30.062704  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
E1004 02:12:35.183654  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.311487318s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rv865" [f9961883-88af-4b03-af47-18ee08dab91d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.035347221s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qxkc4" [38ef470c-76c2-400a-bfa7-803c422c51d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1004 02:13:05.905388  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-qxkc4" [38ef470c-76c2-400a-bfa7-803c422c51d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.015515329s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-kntz5" [a3820e98-f67b-45a7-afb4-768bdd817732] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-kntz5" [a3820e98-f67b-45a7-afb4-768bdd817732] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.023816714s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dxw6q" [7b249087-b308-442c-a32c-a8a9187e0117] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dxw6q" [7b249087-b308-442c-a32c-a8a9187e0117] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.013705116s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (91.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.303288727s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (79.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1004 02:13:46.866223  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-171116 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m19.93572924s)
--- PASS: TestNetworkPlugins/group/bridge/Start (79.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gmx7t" [6a957a01-418f-4c4c-a221-338993bb0cf5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.030928961s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xbwjt" [ea86a971-8e39-414a-9df6-73151842e635] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xbwjt" [ea86a971-8e39-414a-9df6-73151842e635] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.015227872s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-171116 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-171116 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-b9d7w" [dec417b7-52d9-44d0-908f-db7012e6da39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1004 02:15:08.787055  135565 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17348-128338/.minikube/profiles/old-k8s-version-107182/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-b9d7w" [dec417b7-52d9-44d0-908f-db7012e6da39] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.019945832s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-171116 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-171116 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    

Test skip (31/290)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-554732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-554732
--- SKIP: TestStartStop/group/disable-driver-mounts (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-171116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-171116

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-171116

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/hosts:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/resolv.conf:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-171116

>>> host: crictl pods:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: crictl containers:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> k8s: describe netcat deployment:
error: context "kubenet-171116" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-171116" does not exist

>>> k8s: netcat logs:
error: context "kubenet-171116" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-171116" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-171116" does not exist

>>> k8s: coredns logs:
error: context "kubenet-171116" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-171116" does not exist

>>> k8s: api server logs:
error: context "kubenet-171116" does not exist

>>> host: /etc/cni:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: ip a s:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: ip r s:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: iptables-save:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: iptables table nat:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-171116" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-171116" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-171116" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: kubelet daemon config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> k8s: kubelet logs:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-171116

>>> host: docker daemon status:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: docker daemon config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: docker system info:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: cri-docker daemon status:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: cri-docker daemon config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: cri-dockerd version:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: containerd daemon status:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: containerd daemon config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: containerd config dump:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: crio daemon status:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: crio daemon config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: /etc/crio:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

>>> host: crio config:
* Profile "kubenet-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-171116"

----------------------- debugLogs end: kubenet-171116 [took: 2.798664149s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-171116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-171116
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

TestNetworkPlugins/group/cilium (3.21s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-171116 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-171116

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-171116

>>> host: /etc/nsswitch.conf:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/hosts:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/resolv.conf:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-171116

>>> host: crictl pods:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: crictl containers:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> k8s: describe netcat deployment:
error: context "cilium-171116" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-171116" does not exist

>>> k8s: netcat logs:
error: context "cilium-171116" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-171116" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-171116" does not exist

>>> k8s: coredns logs:
error: context "cilium-171116" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-171116" does not exist

>>> k8s: api server logs:
error: context "cilium-171116" does not exist

>>> host: /etc/cni:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: ip a s:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: ip r s:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: iptables-save:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: iptables table nat:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-171116

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-171116

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-171116" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-171116" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-171116

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-171116

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-171116" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-171116" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-171116" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-171116" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-171116" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: kubelet daemon config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> k8s: kubelet logs:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-171116

>>> host: docker daemon status:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: docker daemon config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: docker system info:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: cri-docker daemon status:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: cri-docker daemon config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: cri-dockerd version:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: containerd daemon status:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: containerd daemon config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: containerd config dump:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: crio daemon status:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: crio daemon config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: /etc/crio:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

>>> host: crio config:
* Profile "cilium-171116" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-171116"

----------------------- debugLogs end: cilium-171116 [took: 3.070402234s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-171116" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-171116
--- SKIP: TestNetworkPlugins/group/cilium (3.21s)